
Simulation Optimization Techniques: Exploring Operational Methods


Introduction

This article compiles code snippets sourced from various frameworks, including Intel MKL and the SceneNet deep learning framework, that utilize numerical and stochastic optimization methods. The C++ code examples provided can be parallelized for performance using OpenMP (OMP) or Open MPI (MPI).

Instructions for Running Code Using OMP

The repository is configured to run with OMP, enabling the use of parallelization and vectorization techniques.

First, install CMake on Linux using: sudo apt-get install cmake

Then clone the repository: git clone https://github.com/aswinvk28/cpp-performance-math-problems cpp_math_problems

Switch to the relevant branch: git checkout navier-stokes-autodiff

To build the project, execute: cmake .. && make all

To run the application, input the following command: ./app --intervals 4 --iterations 8 --condition_factor 2 --multiplier 1000000

Instructions for Running Code Using MPI

Modify the code in the repository to enable MPI execution, then run the command-line application with mpiexec by executing the provided shell script.

Run the project with: ./run_project.sh

And then execute: mpiexec -n 4 ./app --intervals 4 --iterations 8 --condition_factor 2 --multiplier 1000000

1. Create the implementation function: double * navier_stokes(double * u2, double u0, const double dt, const double dx, const double p, const double alpha, int length, double * model)

2. Reference implementation function: double * navier_stokes_ref(double * u1, double u0, const double dt, const double dx, const double p, const double alpha, int length, double * model)

The implementation function executes at each interval of the specified vector, like the reference implementation, and both are compared against a manually written model.

Initial Hypothesis

The combinatorial equation of the linearly estimated model is:

Combinatorial Equation

A linear damping factor is applied to the original units-per-cell vector, satisfying our estimated model and indicating that the observed units per cell meet our assumptions.

Preliminary Analysis

In the preliminary analysis, a macroscopic approach leads to the optimal curve for the units per cell, represented by a power series formula:

In a power series equation, the accuracy score is measured by the mean squared error in polynomial regression, gauged through error vectors from the original data points to the predicted curve. In a Support Vector Machine (SVM), the analogous quantities are the functional or geometric margins. Assuming each point in a dispersion relation transfers mass to the others, a flow function and an analytic complex variable can address our problem, allowing us to express our phase in a power series equation as follows:

Derivative: A₀ - A₁ sin(θ) - 2 A₂ sin(2θ) - 3 A₃ sin(3θ) - 4 A₄ sin(4θ) - 5 A₅ sin(5θ) - …

This creates an analytic function in every direction, generating a flow function that classifiers like SVM can predict. Another set of power series coefficients represents those with shifted phase:

Derivative: B₀ + B₁ cos(θ) + 2 B₂ cos(2θ) + 3 B₃ cos(3θ) + 4 B₄ cos(4θ) + 5 B₅ cos(5θ) + …

We define our "Flow Model" as: f(x, t): the state space model

Cost Function


Performing gradient descent on the reference implementation function sets the analysis context as discrete, with the varying interval sizes producing model conformance checks.

The resulting coefficients form a Fourier transform; applying an inverse short-time Fourier transform (ISTFT) maps them back to the time domain.

Popular Python packages for STFT include:

  • numpy
  • librosa
  • scipy
  • tensorflow

Other packages for STFT are available.
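As a minimal sketch of that round trip using scipy (the test signal and window parameters here are illustrative, not the article's data):

```python
import numpy as np
from scipy.signal import stft, istft

# Synthetic time series standing in for the model coefficients
fs = 1000                      # sampling frequency in Hz
t = np.arange(0, 1, 1 / fs)
signal = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

# Forward STFT: time domain -> time-frequency coefficients
f, seg_t, Z = stft(signal, fs=fs, nperseg=256)

# Inverse STFT: coefficients -> back to the time domain
_, reconstructed = istft(Z, fs=fs, nperseg=256)

print(np.allclose(signal, reconstructed[: signal.size], atol=1e-10))
```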

This article from Intel discusses the consistency of floating-point arithmetic using C++ compilers. FP (floating-point) accuracy must remain consistent across all GPUs and operating systems. Because floating-point addition and multiplication are not strictly associative, the order of operations must be controlled when work is distributed across threads and vector units (VU), so that results remain reproducible. The problems demonstrated utilize OMP (OpenMP) and OpenMPI for evaluating a computational model. The chosen computational models for the exercises are in C++ and Python, addressing:

  • Monte Carlo Diffusion as a Dispersion Relation
  • Lattice Boltzmann Method
  • Design of Experiments
  • Complex Analysis

Results Obtained by Executing the Code for the Given Parameters

Mode 1: The quantization factor is set to a default fraction: qf = (max. estimated value) * 2 / 3. You can adjust the max. estimated value of the model as the force term and assess the outliers based on the chosen fraction of the max. estimated value.
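A minimal sketch of that thresholding, assuming the model estimates live in a NumPy array (the values and names are illustrative):

```python
import numpy as np

estimates = np.array([0.12, 0.35, 0.48, 0.91, 1.50])  # hypothetical model estimates

qf = estimates.max() * 2 / 3          # quantization factor: 2/3 of the max estimate
outliers = estimates[estimates > qf]  # values above the chosen fraction

print(qf, outliers)
```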

Run the application with: ./app --intervals 4 --iterations 8 --condition_factor 2 --multiplier 1

Concordance, Reliability, Residual Error, Precision

Code Examples

./app --intervals 3 --iterations 7 --condition_factor 2 --multiplier 1 --quantisation_factor 2e-2

Concordance, Reliability, Residual Error, Precision

The variation in interval lengths affects precision, reliability, and concordance. To interpolate our hypothesized units-per-cell parameter, calibrated for Length = 100, a spline interpolation is used with control points α and 1 - α.

Setting the Goal

Precision

The precision varies between both models due to:

  • --intervals 4 --iterations 8 being used in model1
  • --intervals 3 --iterations 7 being used in model2

Performance improvements are applied within each interval (of lengths 2² and 2³ respectively) considered for our computational model. We take the mean absolute error (MAE) at each interval to finalize our results.

Reliability

Reliability values fluctuate between models, as intervals vary with precision changes. Reliability is defined as the probability of failure across N experiments.

Reliability Analysis

Residual

The model value changes by minor deviations over an interval. The deviations, normalized by the model value, yield the residual property of the model within that interval.

Residual Analysis

Represented as a condition number, κ = ‖A‖ ‖A⁻¹‖, this statistic takes a norm of the matrix multiplied by the norm of the matrix inverse. For large intervals it is relevant because it admits a Taylor series expansion.

Concordance

In the model, the force term represents ρ ( ∂u / ∂t ). The derivative of the force term, V(x) = ( 1 / ρ ) ( ∂F / ∂x ), represents the intervals.

The concordance statistic considers a numeric value and a complex argument. The numeric value corresponds to ( Δx / x ), while the complex argument relates to the ratio z₁ / z₂ of two complex numbers, where the complex number is the product of the contour integral of the flow function and the complex derivative.

Inference from Statistical Analysis

Interval analysis is crucial for understanding model parameters such as precision, residual error, condition numbers, reliability models, and concordance.

As stated in “Preliminary Analysis”, the power series parameter θ, over the interval from a to b with interval length L = ( b - a ), is represented as: θ = 2π n x / L. In the Taylor series expansion, the kᵗʰ derivative will include a normalization parameter nₖ due to the spline interpolation with control points α and 1 - α at the interval for every derivative, maintaining the same form: f⁽ᵏ⁾ = α fₖ + (1 - α) fₖ₊₁. Utilizing the Taylor series expansion for deviations leads to a birthday function studied by Ramanujan. The birthday problem asks how many trials a person can make, in a year of N days, before encountering another individual sharing the same birthday. Further details can be found in the [Birthday Problem](https://en.wikipedia.org/wiki/Birthday_problem).
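A minimal sketch of that control-point blend, assuming sampled derivative values on a grid of 100 points (the arrays and the value of α are illustrative):

```python
import numpy as np

length = 100
alpha = 0.35                          # hypothetical control point; 1 - alpha is its complement
f_k = np.linspace(1.0, 0.0, length)   # hypothetical k-th derivative samples on the interval
f_k1 = np.linspace(0.9, 0.1, length)  # hypothetical (k+1)-th derivative samples

# Blend with control points alpha and 1 - alpha:
# f^(k) = alpha * f_k + (1 - alpha) * f_{k+1}
f_interp = alpha * f_k + (1 - alpha) * f_k1

print(f_interp[:5])
```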

Birthday Problem Normalized by Interval Length

The birthday problem is relevant for deriving Ramanujan’s distributions, including:

  • Ramanujan Q-distribution
  • Ramanujan R-distribution

The Ramanujan R-distribution is particularly useful in our macroscopic problem: iterating over the intervals reveals that the birthday problem can create a generating function of recorded deviations, making it a probabilistic problem over a permutation matrix.

In our context, applying Optical Flow Phase-Based methods to every derivative of our flow function yields two options during the data wrangling stage:

  1. Consider coefficients Aₖ as an ordered set of real-valued variables, and derivatives as k unordered sets of complex-valued functions, representing a matrix of complex derivatives and coefficients to solve a graph problem.
  2. Treat coefficients Aₖ as part of a time series continuum, performing randomization to select the complex-valued state space by matching the k world traces in pairs, where the jᵗʰ derivative may align with an iᵗʰ derivative.

The paper [Multi-Match Algorithm](https://arxiv.org/pdf/1506.02335.pdf) describes a process that matches random permutations in a state space of rank (N+k), aligning k elements to specific k elements.

The Ramanujan R-distribution demonstrates the probability that matches can occur without repeating the structure of two columns of coefficients throughout, facilitating the problem resolution by independently considering flow function derivatives as a permutation matrix problem.

The binomial distribution is represented as the product of Ramanujan Q-distribution and Ramanujan R-distribution.

The significance of interval length is substantial: whether dealing with momentum conservation models, kᵗʰ derivatives, energy, or Hamiltonian mechanics, the same spline interpolation is employed for this macroscopic problem.

The autodiff package facilitated my deduction of derivatives, excelling in both forward and reverse mode differentiation.

@Courtesy: [https://autodiff.github.io/](https://autodiff.github.io/)

According to the paper [Analyzing Datasets](https://arxiv.org/pdf/1801.06397.pdf) using Optical Flow methods, the results indicate two aspects: 1. Diversity 2. Realism

A network trained on specialized data generalizes poorly to other datasets compared to one trained on diverse data. Additionally, most learning tasks can be efficiently tackled using simple data and data augmentation.

Three operational simulation optimization techniques will be discussed, showcasing various methods to attain optimal solutions.

Computational Models

Monte Carlo (MC) Diffusion as a Dispersion Relation

In Monte Carlo diffusion, the diffusion probability can be expressed as the sum of the probabilities of air entering and exiting a designated point.

Monte Carlo Diffusion

Problem: Given a computational model over a brief time interval, small displacements occur over time when an air input velocity is supplied.

With input air, the mass flux can be articulated as:

Mass Flux Equation

This linear equation, when computed alongside the dynamics of air's Kinetic Energy (K.E.), establishes a relationship between space and time:

Navier-Stokes Relation

The model variables include:

  • Number of units per cell
  • Probability of movement
  • Dimensionless constant of air density rate
  • Input time series of fluid velocity

In deriving the Navier-Stokes model, Fick’s diffusive flux and Gibbs free energy equations are utilized. The dimensionless air density rate parameter remains constant during computation, assuming synchronicity with the units-per-cell parameter. A major limitation of this model is that it distributes linearly from 1 to 0, neglecting bounce-back and shear stress on walls analyzed within a unit segment.

There is dispersion of particles due to the large input signal.
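A minimal random-walk sketch of this kind of diffusion, assuming a 1D lattice where each particle moves with a fixed probability per step (all parameters here are illustrative, not the article's calibrated values):

```python
import numpy as np

rng = np.random.default_rng(42)

n_particles = 10_000   # hypothetical number of particles released at the origin
n_steps = 500          # brief time interval discretized into steps
p_move = 0.5           # probability of movement per step

positions = np.zeros(n_particles)
for _ in range(n_steps):
    moves = rng.random(n_particles) < p_move           # which particles move this step
    steps = rng.choice([-1.0, 1.0], size=n_particles)  # unbiased left/right displacement
    positions += np.where(moves, steps, 0.0)

# Dispersion: for diffusive motion the variance grows linearly with time,
# so variance per step recovers the effective movement probability
print(positions.var() / n_steps)  # ~ p_move
```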

Stochastic Optimization (SO)

Using stochastic gradient descent with the TensorFlow library, the computational model's parameters are determined within a unit segment length. An input velocity time series of t^5 is employed for the Navier-Stokes relation.

Velocity Computation

The computed velocity decreases approximately linearly from 0.2 to 0.1. The total number of significant points is taken to be 100. Other parameters in the equation include:

  1. Points of significant velocity observations plotted as scatter plots:

    Significant Velocity Observations
  2. The initial velocity:

    Initial Velocity
  3. The probability:

    Probability
  4. The alpha value:

    Alpha Value

The displacement and time parameters for the computational model are dt = 1e-3 and dx = 1e-6. By calibrating results from Monte Carlo diffusion, a consistent pattern emerges for the volumetric relationship of units per cell.
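A minimal sketch of such a gradient-descent fit in TensorFlow, assuming a generic parameterized stand-in model rather than the article's exact Navier-Stokes residual (the loss, the linear target, and the parameter names are all illustrative):

```python
import numpy as np
import tensorflow as tf

dt, dx, length = 1e-3, 1e-6, 100
t = np.linspace(0.0, 1.0, length).astype(np.float32)
u_in = t ** 5                                               # input velocity time series of t^5
target = np.linspace(0.2, 0.1, length).astype(np.float32)   # velocity decreasing from 0.2 to 0.1

p = tf.Variable(0.5)      # hypothetical probability-of-movement parameter
alpha = tf.Variable(0.1)  # hypothetical alpha parameter

opt = tf.keras.optimizers.SGD(learning_rate=0.05)
for step in range(2000):
    with tf.GradientTape() as tape:
        pred = p * u_in + alpha * (1.0 - t)      # stand-in model, not the real relation
        loss = tf.reduce_mean(tf.square(pred - target))
    grads = tape.gradient(loss, [p, alpha])
    opt.apply_gradients(zip(grads, [p, alpha]))

print(p.numpy(), alpha.numpy(), loss.numpy())
```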

Gaussian Kurtosis Explanation

This is expected, as the original model demonstrated that the Kurtosis of the original Navier-Stokes relation exceeds that of the volumetric relationship of units per cell in the computational model.

Kurtosis Analysis

To observe the entire cuboid of air within our bounded context at the same time, the interval where this pattern recurs must be identified. Such intervals can be shown to exist by a sizing argument. It is feasible to illustrate such intervals of kurtosis for the speed of sound at 343 m/s, calibrated within every cuboid of designated space.

Lattice Boltzmann Method (LBM)

In the Lattice Boltzmann method, the fluid is discretized into eight velocity vectors and one rest velocity at any given time. The method determines the speed of sound by:

Speed of Sound in LBM
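For reference, in the standard D2Q9 lattice the speed of sound is tied to the lattice spacing and time step; this is the textbook relation, not recovered from the article's image:

```latex
c_s = \frac{1}{\sqrt{3}} \frac{\Delta x}{\Delta t},
\qquad
c_s^2 = \frac{c^2}{3}
```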

The energy of the Boltzmann Lattice Model depends on density, velocity, and velocity vector, with speed of sound as an additional parameter. Below is a demonstration of the Lattice Boltzmann Method applied to an obstacle.

LBM Demonstration

Code Examples

The energy of the Lattice Boltzmann Model relies on the input/output direction, the density or velocity acting on the particle, and the velocity vectors from each direction.

Simulation Optimization Model: Evolutionary Strategies (ES)

Evolutionary Strategies

The model based on input time series evolves combinatorially as described below:
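A minimal evolutionary-strategies loop in that spirit, assuming a simple (mu, lambda) scheme over a toy objective; the population sizes, mutation scale, and fitness function are illustrative stand-ins for the article's model:

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(x):
    # Toy objective standing in for the model error on the input time series
    return np.sum((x - 0.5) ** 2)

mu, lam, sigma, dim = 5, 20, 0.1, 4
parents = rng.random((mu, dim))

for generation in range(100):
    # Each offspring mutates a randomly chosen parent
    idx = rng.integers(0, mu, size=lam)
    offspring = parents[idx] + sigma * rng.standard_normal((lam, dim))
    # Select the mu best offspring as the next parent population
    scores = np.array([fitness(x) for x in offspring])
    parents = offspring[np.argsort(scores)[:mu]]

print(parents[0], fitness(parents[0]))
```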

Code sample for LBM: aswinvk28/doe-response-surface/lattice-boltzmann.py. A fraction of 1/9 is allocated to orthogonal directions, 1/36 to diagonal directions, and 4/9 to the rest velocity, as in the sketch below.
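A minimal sketch of those D2Q9 weights together with the standard equilibrium-distribution evaluation (a simplified fragment, not the repository's code):

```python
import numpy as np

# D2Q9 lattice: 1 rest velocity + 4 orthogonal + 4 diagonal directions
velocities = np.array([[0, 0],
                       [1, 0], [0, 1], [-1, 0], [0, -1],      # orthogonal
                       [1, 1], [-1, 1], [-1, -1], [1, -1]])   # diagonal
weights = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def equilibrium(rho, u):
    """Standard D2Q9 equilibrium distribution for density rho and velocity u."""
    cu = velocities @ u                  # projection of u on each lattice direction
    usq = u @ u
    return rho * weights * (1 + 3 * cu + 4.5 * cu ** 2 - 1.5 * usq)

print(equilibrium(1.0, np.array([0.05, 0.0])))
```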

Design of Experiments

The design of experiments package can be installed using: pip install --user pyDOE2. It encompasses a variety of techniques developed to simulate a given world trace. The experiments created include Factorial Designs, Response-Surface Designs, and Randomized Designs; a usage sketch follows the lists below.

Factorial Designs: 1. General Full-Factorial (fullfact) 2. 2-Level Full-Factorial (ff2n) 3. 2-Level Fractional-Factorial (fracfact) 4. Plackett-Burman (pbdesign)

Response-Surface Designs: 1. Box-Behnken (bbdesign) 2. Central-Composite (ccdesign)

Randomized Designs: 1. Latin-Hypercube (lhs)
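A minimal usage sketch of the designs listed above, assuming pyDOE2 is installed (the factor counts and sample sizes are illustrative):

```python
from pyDOE2 import fullfact, ff2n, bbdesign, lhs

# General full-factorial: one factor with 2 levels, one with 3 levels
print(fullfact([2, 3]))

# 2-level full-factorial for 3 factors (coded -1 / +1)
print(ff2n(3))

# Box-Behnken response-surface design for 3 factors
print(bbdesign(3))

# Latin-Hypercube: 5 samples over 2 factors, values in [0, 1]
print(lhs(2, samples=5))
```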

Code Examples

Simulation Optimization Model: Response Surface Methodology

Using Response Surface Methodology, a series of regression models are fitted against the provided input parameters. The Design of Experiments computational model can run a set of experiments that interpolate data based on specific characteristics, such as the L2 norm and the flow characteristics used for modeling the state space.

In this case, the MNIST digits and EMNIST characters are simulated from ground-truth data using two extreme intervals, specifically MNIST digits 7 and 5, through the Randomized Designs (Latin Hypercube) method. The interpolating characteristic used here is derived from the experiment matrix and the difference between the experiment value and the ground truth, with a degree of 1.
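A minimal response-surface sketch of that idea: fit a degree-1 regression of a response against a Latin-Hypercube experiment matrix. The response function here is a synthetic stand-in, not the MNIST pipeline:

```python
import numpy as np
from pyDOE2 import lhs

rng = np.random.default_rng(1)

X = lhs(2, samples=30)   # experiment matrix: 30 runs, 2 factors in [0, 1]
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.05 * rng.standard_normal(30)  # stand-in response

# Degree-1 response surface: y ~ b0 + b1*x1 + b2*x2, fitted by least squares
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(coef)   # approximately [0, 2, -1]
```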

MNIST Digits Transformations

The code repository for design of experiments is given below: aswinvk28/doe-response-surface

Circulation and Flux in Complex Analysis

In complex analysis, analytic complex variables are represented as follows:

Complex Variables Representation

If the complex function is harmonic, they are represented as:

Harmonic Complex Functions

Suppose there exists a flow function defined by:

Flow Function

The circulation and flux along specified contour lines are defined as:

Circulation and Flux
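For reference, the textbook definitions behind those images: for a flow given by an analytic function f(z) along a contour C, circulation and flux arise as the real and imaginary parts of the same contour integral (this is the standard result, not recovered from the article's images):

```latex
\Gamma = \operatorname{Re} \oint_C \overline{f(z)}\, dz,
\qquad
\Phi = \operatorname{Im} \oint_C \overline{f(z)}\, dz
```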

Optical Flow Analysis Using Phase-Based Methods

Through phase-based methods, the spatial and temporal properties of an image are extracted. Optical flow analyses comprise methods capable of image segmentation and of determining constraints in video, among others. Below is a repository that implements optical flow techniques to predict differential relations in an image. Optical flow derivatives are extracted using the SceneNet framework, transforming the image into a defined size while applying horizontal and vertical fields of view.

Packages capable of performing Optical Flow include:

  • Optical Flow - OpenCV-Python Tutorials 1 Documentation: An introduction to concepts of optical flow and its estimation using the Lucas-Kanade method.
  • SceneNet RGB-D: A collection of 5 million photorealistic images of synthetic indoor trajectories with perfect ground truth.
  • scivision/pyoptflow: Python implementation of optical flow estimation using the Scipy stack.
Figure panels: optical flow changes, optical flow image changes, optical flow visualization, temporal changes.

How is this generated? Multiply the complex conjugate of a 2-channel 2D band-pass signal with the optical flow derivatives treated as a complex number, yielding a real part, an imaginary part, and an arctan2 phase component. To depict the optical flow derivatives as a spatial component, consider them over moving images, resolving them into the X and Y axes.
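A minimal NumPy sketch of that complex-conjugate product, assuming two arbitrary 2D arrays stand in for the band-pass response and the flow derivatives (random data, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)
h, w = 64, 64

# 2-channel band-pass response packed as a complex image (stand-in data)
bandpass = rng.standard_normal((h, w)) + 1j * rng.standard_normal((h, w))

# Optical flow derivatives along X and Y packed the same way
flow = rng.standard_normal((h, w)) + 1j * rng.standard_normal((h, w))

product = np.conj(bandpass) * flow        # complex-conjugate product
real_part = product.real
imag_part = product.imag
phase = np.arctan2(imag_part, real_part)  # per-pixel arctan2 phase component

print(phase.shape, phase.min(), phase.max())
```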

The code repository for optical flow analysis is provided below: aswinvk28/optical-flow-analysis

Conclusion

In conclusion, combinatorics analyzes an algorithm, generalizing the problem of sizing the state space and model complexity, and ultimately solving the representation of the state space. Complex analysis establishes the temporal and spatial components, or data complexity. The Design of Experiments method utilizes social graphs to enable standardized data simulation. Monte Carlo methods, being sampling techniques, are applied across various use cases and combine effectively with other problem-solving techniques such as Hamiltonian mechanics (MCMC with Hamiltonian dynamics).

This paper on the Variations of Gradients in Deep Learning Networks is particularly engaging, as it discusses activation patterns across several nodes within the neural network. It also notes that significant variations within a deep neural network occur when layers with fewer nodes alter their activation patterns. Observations indicate that deep neural networks (DNNs) achieve their complexity and invariance simultaneously.