"Modeling in the sciences is in fact almost always generative modeling." What a BOLD statement to start a paper with! Here is the passage -->

1.1 Motivation
One major division in machine learning is generative versus discriminative modeling. While in discriminative modeling one aims to learn a predictor given the observations, in generative modeling one aims to solve the more general problem of learning a joint distribution over all the variables. A generative model simulates how the data is generated in the real world. "Modeling" is understood in almost every science as unveiling this generating process by hypothesizing theories and testing these theories through observations. For instance, when meteorologists model the weather they use highly complex partial differential equations to express the underlying physics of the weather. Or when an astronomer models the formation of galaxies s/he encodes in his/her equations of motion the physical laws under which stellar bodies interact. The same is true for biologists, chemists, economists and so on. Modeling in the sciences is in fact almost always generative modeling.
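The generative/discriminative split above can be made concrete with a toy sketch (all names and numbers here are illustrative assumptions, not from the paper): a generative model specifies the joint p(x, y) and can therefore simulate data, while a discriminative model specifies only the posterior p(y | x).

```python
import numpy as np

rng = np.random.default_rng(0)

# Generative view: model the joint p(x, y) via p(y) and p(x | y),
# here with one Gaussian per class (means 0 and 2, unit variance).
def sample_joint(n):
    y = rng.integers(0, 2, size=n)           # p(y): fair coin
    x = rng.normal(loc=2.0 * y, scale=1.0)   # p(x | y): class-conditional Gaussian
    return x, y

# Discriminative view: model only p(y | x). For the Gaussians above,
# Bayes' rule gives a logistic posterior in x.
def p_y_given_x(x):
    return 1.0 / (1.0 + np.exp(-(2.0 * x - 2.0)))

x, y = sample_joint(100_000)
# The generative model can *simulate* new data; the discriminative one
# cannot, but both agree on the prediction task.
print(p_y_given_x(1.0))   # 0.5 -- exactly between the two class means
```

The point of the sketch: only the generative side can produce new samples, which is why "modeling" in the sciences, as the quote argues, is generative.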
Rohit Dhankar’s Post
More Relevant Posts
-
📃Scientific paper: Noise in the reverse process improves the approximation capabilities of diffusion models

Abstract: In Score-based Generative Modeling (SGMs), the state of the art in generative modeling, stochastic reverse processes are known to perform better than their deterministic counterparts. This paper delves into the heart of this phenomenon, comparing neural ordinary differential equations (ODEs) and neural stochastic differential equations (SDEs) as reverse processes. We use a control-theoretic perspective by posing the approximation of the reverse process as a trajectory-tracking problem. We analyze the ability of neural SDEs to approximate trajectories of the Fokker-Planck equation, revealing the advantages of stochasticity.

First, neural SDEs exhibit a powerful regularizing effect, enabling $L^2$ norm trajectory approximation surpassing the Wasserstein-metric approximation achieved by neural ODEs under similar conditions, even when the reference vector field or score function is not Lipschitz. Applying this result, we establish the class of distributions that can be sampled using score matching in SGMs, relaxing the Lipschitz requirement on the gradient of the data distribution in existing literature.

Second, we show that this approximation property is preserved when network width is limited to the input dimension of the network. In this limited-width case, the weights act as control inputs, framing our analysis as a controllability problem for neural SDEs in probability density space. This sheds light on how noise helps to steer the system towards the desired soluti...

Continued on ES/IODE ➡️ https://etcse.fr/Ipnh
-------
If you find this interesting, feel free to follow, comment and share. We need your help to enhance our visibility, so that our platform continues to serve you.
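The abstract's central claim, that stochastic reverse processes beat deterministic ones, can be illustrated with a classical Langevin toy (my own sketch, not the paper's construction): drift along a known score with and without injected noise. Drift alone collapses the samples; drift plus noise preserves the target spread.

```python
import numpy as np

rng = np.random.default_rng(1)
score = lambda x: -x          # score of N(0, 1): d/dx log p(x) = -x

n, steps, dt = 50_000, 500, 0.01
x_det = rng.normal(5.0, 1.0, n)   # start far from the target distribution
x_sto = x_det.copy()

for _ in range(steps):
    # Deterministic update (ODE-like): pure drift along the score.
    x_det = x_det + dt * score(x_det)
    # Stochastic update (SDE-like, Langevin): same drift plus injected noise.
    x_sto = x_sto + dt * score(x_sto) + np.sqrt(2 * dt) * rng.normal(size=n)

print(x_det.std())   # collapses toward 0: drift alone shrinks the spread
print(x_sto.std())   # stays near 1: noise keeps the samples distributed as N(0, 1)
```

This is only an analogue of the paper's point, but it shows in ten lines why stochasticity in the reverse dynamics can be a feature rather than a nuisance.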
-
MIT Researchers Introduce PFGM++: A Groundbreaking Fusion of Physics and AI for Advanced Pattern Generation Quick Read: https://lnkd.in/gtzXaZS2 Paper: https://lnkd.in/eQHy2C55 If you like our work, you will love our newsletter: https://lnkd.in/gTkg4N_D #artificialintelligence #datascience #machinelearning #physics #ai
MIT Researchers Introduce PFGM++: A Groundbreaking Fusion of Physics and AI for Advanced Pattern Generation
https://www.marktechpost.com
-
Diffusion and Poisson flow models have a lot in common, besides being based on equations imported from physics.

During training, a diffusion model designed for image generation typically starts with a picture — a dog, let's say — and then adds visual noise, altering each pixel in a random way until its features become thoroughly shrouded (though not completely eliminated). The model then attempts to reverse the process and generate a dog that's close to the original. Once trained, the model can successfully create dogs — and other imagery — starting from a seemingly blank canvas.

Poisson flow models operate in much the same way. During training, there's a forward process, which involves adding noise, incrementally, to a once-sharp image, and a reverse process in which the model attempts to remove that noise, step by step, until the initial version is mostly recovered. As with diffusion-based generation, the system eventually learns to make images it never saw in training.

But the physics underlying Poisson models is entirely different. Diffusion is driven by thermodynamic forces, whereas Poisson flow is driven by electrostatic forces. The latter represents a detailed image using an arrangement of charges that can create a very complicated electric field. That field, however, causes the charges to spread more evenly over time — just as milk naturally disperses in a cup of coffee. The result is that the field itself becomes simpler and more uniform. But this noise-ridden uniform field is not a complete blank slate; it still contains the seeds of information from which images can be readily assembled.
New ‘Physics-Inspired’ Generative AI Exceeds Expectations | Quanta Magazine
quantamagazine.org
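The forward "add noise until shrouded" process described above can be sketched in a few lines. This is a generic DDPM-style variance-preserving schedule on a 1-D signal standing in for an image — my own illustrative toy, not the article's or PFGM++'s code; the schedule constants are common defaults, assumed here.

```python
import numpy as np

rng = np.random.default_rng(2)

# Forward process: blend a clean signal toward pure noise over T steps
# (variance-preserving schedule, as in DDPM-style diffusion).
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alpha_bar = np.cumprod(1.0 - betas)

def noisy_at(x0, t):
    """Sample x_t given the clean signal x0 in one shot."""
    eps = rng.normal(size=x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps, eps

x0 = np.sin(np.linspace(0, 2 * np.pi, 64))   # stand-in for the "dog picture"
x_early, _ = noisy_at(x0, 10)     # features still visible
x_late, _ = noisy_at(x0, T - 1)   # essentially pure noise, seeds of info remain

# A trained network would predict eps from (x_t, t); the reverse process
# then walks from x_late back toward a clean sample, one step at a time.
print(np.corrcoef(x0, x_early)[0, 1])     # close to 1: barely shrouded
print(np.corrcoef(x0, x_late)[0, 1])      # near 0: thoroughly shrouded
```

The correlations quantify the article's "shrouded but not completely eliminated" phrasing: early steps keep nearly all the structure, late steps almost none.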
-
Property prediction for materials under realistic conditions has been a long-standing challenge within the digital transformation of #materials #design. #MatterSim investigates #atomic #interactions from the very fundamental principles of #quantum #mechanics. https://msft.it/6043YXBKV #AI #Microsoft #MicrosoftResearch
Deep-learning model aims for precise and realistic materials simulation
https://www.microsoft.com/en-us/research
-
My latest research on Generative Design as part of my PhD programme at Universidad Nebrija Escuela Politécnica Nebrija has just been published with Springer Nature Group in the "Structural and Multidisciplinary Optimization" journal.

This paper proposes a combination of a geometric deep learning architecture and a genetic algorithm to solve an aerodynamic shape optimization problem. It handles non-parametric 3D geometries in the form of triangle meshes, including datasets with file formats such as *.stl, *.ply, and *.obj.

On the one hand, the machine learning framework is composed of a pre-processing frame in charge of normalizing and homogenizing all meshes to make them suitable to be processed as graphs by the GVAE generative model. On this basis, the proposed neural network employs spatial spline-based and Chebyshev spectral graph convolutional operators in order to capture the geometrical differentiating features between all the meshes, and is able to properly embed this shape variability in the latent space.

On the other hand, the NSGA-II is the agent that guides the optimization process. With the objective of finding the optimal solution given a set of performance indices, it is in charge of generating new candidates by sampling the low-dimensional latent space and analyzing their aerodynamic potential through the connection with ANSYS Fluent for CFD calculations. As a result, the model is able to automatically generate geometries that combine features of those in the initial dataset, and is even able to improve their performance. A set of Pareto fronts with all available solutions is generated at the end so as to decide which solution or set of solutions is the most suitable for a given application.

Although this article is available under subscription, as part of the Springer Nature Content Sharing Initiative, I am happy to share with you this view-only version through the following link: https://rdcu.be/dzHAP

Hope you enjoy it.
Aerodynamic shape optimization using graph variational autoencoders and genetic algorithms - Structural and Multidisciplinary Optimization
link.springer.com
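The optimization loop the post describes — sample latent vectors, decode to shapes, score them, evolve — can be sketched schematically. Everything below is a deliberately simplified assumption: `decode` stands in for the GVAE decoder, a quadratic `drag` stands in for the ANSYS Fluent CFD evaluation, and a plain elitist GA stands in for NSGA-II (which is multi-objective).

```python
import numpy as np

rng = np.random.default_rng(3)

LATENT_DIM = 8
decode = lambda z: z                      # placeholder for the GVAE decoder
drag = lambda shape: np.sum(shape ** 2)   # placeholder "CFD" objective to minimize

pop = rng.normal(size=(32, LATENT_DIM))   # initial population in latent space
for generation in range(100):
    scores = np.array([drag(decode(z)) for z in pop])
    parents = pop[np.argsort(scores)[:16]]                      # keep the fitter half
    children = parents + 0.1 * rng.normal(size=parents.shape)   # mutate in latent space
    pop = np.concatenate([parents, children])                   # elitist replacement

best = min(drag(decode(z)) for z in pop)
print(best)   # shrinks toward the optimum as the latent search converges
```

The key design idea this mirrors is that the GA never touches raw meshes: it searches the smooth, low-dimensional latent space, and the decoder guarantees every candidate is a plausible shape.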
-
Creating a mathematical theory and proof for such an advanced concept is highly speculative and not feasible in a real-world context with our current understanding of physics and computational power. However, in a purely theoretical framework, we might consider the following concepts:

1. **Hamiltonian Function for Quantum System**: Define a Hamiltonian \( \mathcal{H} \) that encodes the total energy of the system, including the cost function of the traveling salesman problem and the Ramsey constraints.
\[ \mathcal{H} = \mathcal{H}_{\text{TSP}} + \mathcal{H}_{\text{Ramsey}} + \mathcal{H}_{\text{Market}} \]
2. **Quantum Annealing**: Use a quantum annealing process to minimize \( \mathcal{H} \), seeking the ground state that represents the optimal solution.
3. **AI-Driven Feedback Loop**: An AI system uses feedback loops to adjust the parameters of \( \mathcal{H} \) based on real-time data, represented by directional sound and light patterns.
4. **Quantum Monte Carlo Simulation**: Implement a Monte Carlo method that uses quantum superposition and entanglement to simulate various outcomes and calculate an expected value for the optimization problem.

The "proof" would be demonstrating that the quantum annealing process converges to the ground state of \( \mathcal{H} \), providing an optimal trading strategy under the constraints given. This would require extensive simulation to verify and is beyond current quantum computational capabilities.
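While the quantum version above is out of reach, its nearest runnable analogue is classical simulated annealing, which likewise cools a system toward a low-energy "ground state". The sketch below anneals only a toy \( \mathcal{H}_{\text{TSP}} \) term (tour length on 12 random cities); the Ramsey and Market terms, and anything genuinely quantum, are omitted by assumption.

```python
import numpy as np

rng = np.random.default_rng(4)

# Classical simulated annealing on a toy H_TSP term alone -- an illustrative
# analogue only; real quantum annealing is not classically simulable at scale.
cities = rng.random((12, 2))

def tour_length(order):
    pts = cities[order]
    return np.sum(np.linalg.norm(pts - np.roll(pts, -1, axis=0), axis=1))

order = np.arange(len(cities))
initial_len = tour_length(order)
best_len = initial_len
temperature = 1.0
for step in range(20_000):
    i, j = sorted(rng.integers(0, len(cities), size=2).tolist())
    candidate = order.copy()
    candidate[i:j + 1] = candidate[i:j + 1][::-1]   # 2-opt style segment reversal
    delta = tour_length(candidate) - tour_length(order)
    # Accept downhill moves always, uphill moves with Boltzmann probability --
    # this is how annealing escapes local minima en route to the ground state.
    if delta < 0 or rng.random() < np.exp(-delta / temperature):
        order = candidate
        best_len = min(best_len, tour_length(order))
    temperature *= 0.9995   # slow cooling schedule
print(initial_len, best_len)
```

The Boltzmann acceptance rule and the cooling schedule are the classical shadows of the quantum tunneling and adiabatic evolution the post invokes; the "proof of convergence to the ground state" is exactly as hard to obtain here, which is the post's own caveat.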
-
Generative pretraining has pretty much burned through the majority of commodity natural text and images available in digital format, and it has taken less than 2-5 years to do so. Short of achieving general reasoning capability, AI's impact in specialized domains will be inversely proportional to the cost of data and proportional to the degree of accessibility to application builders and researchers. Non-commodity datasets are and will become massive IP assets.

As an example, seismic exploration data costs up to $75k/sq mile, or $37 million for a single 500 sq mile survey (boats need operators, raw data needs preprocessing compute, physics simulation, and SME-in-the-loop). The entire field of computational geophysics is devoted to seismic data acquisition and processing. What other domains fall in this category?

At the dawn of the industrial revolution we predicted that energy would become cheap and information expensive. Once the internet came along, history showed the opposite. One can summon almost any bit of information they need at will from the smartphone in their pocket. But your daily commute, heating bill, and Thanksgiving reunion all take a much larger chunk out of your paycheck than your digital services. Just as physical tasks once relegated to humans were replaced by heat, intellectual work once relegated to humans will become displaced; this time data and bits will be the new fuel.
-
Here is another misuse of Artificial Intelligence. If the science and physics of the states of the elements (solid, liquid, vapor, and plasma) are already settled and understood, why do people insist on using machine learning to rediscover what we all already know?

To me it is a symptom that pseudo-AI companies are pushing researchers to use their software to recreate physics with only data. I have seen this happening in respected oil and gas companies. Usually this occurs when the IT departments take too much control of the data science and machine learning workflows that engineers use every day.

My message is this: do not succumb or fall to the temptation of so-called shiny AI software. Know and learn your science and physics first. Keep in mind that marketing or sales people are, in their majority, not engineers or scientists.

#artificialIntelligence #AI #spe #petroleumEngineering Massachusetts Institute of Technology
Scientists use generative AI to answer complex questions in physics
news.mit.edu
-
Matrices are a fundamental mathematical concept with a wide range of applications across various fields, making them a crucial tool for problem-solving and data representation. In this post, we'll dive into the world of matrices, exploring what they are, how they work, and their practical applications.

🔍 Understanding Matrices: A matrix is essentially a two-dimensional array of numbers or symbols arranged in rows and columns, forming a rectangular grid of data. Each element in a matrix is identified by its row and column indices, enabling efficient organization and manipulation of information.

🔢 Matrix Operations: Matrices can be operated on in several ways:
- Addition and Subtraction: Matrices of the same dimensions can be added or subtracted element-wise.
- Scalar Multiplication: You can multiply a matrix by a scalar, which scales each element by that value.
- Matrix Multiplication: A more intricate operation, involving the multiplication of rows and columns of two matrices to yield a new matrix with specific rules.

🚀 Applications of Matrices: Matrices find extensive use in numerous fields:
- Physics: They are employed to represent operators and observables in quantum mechanics.
- Computer Graphics: Vital for 3D transformations and graphics rendering.
- Economics: Input-output matrices analyze complex economic relationships.
- Machine Learning: The backbone of algorithms like linear regression, neural networks, and image processing.
- Cryptography: Used in encryption algorithms, such as the Hill cipher.
- Engineering: Essential in structural analysis, control systems, and signal processing.

Matrices are a cornerstone of problem-solving across diverse domains. Whether you're exploring data, solving physical equations, creating stunning graphics, or developing cutting-edge AI models, a solid grasp of matrices is a valuable asset. So, let's continue to unlock the potential of these mathematical wonders in our quest for innovation and understanding.
🌟 #Matrices #Mathematics #DataScience #Innovation #ProblemSolving 🙂
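The three operations above take only a few lines with NumPy (the standard numerical library for this; the example matrices are arbitrary):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[5, 6],
              [7, 8]])

print(A + B)   # element-wise addition:     [[ 6  8] [10 12]]
print(3 * A)   # scalar multiplication:     [[ 3  6] [ 9 12]]
print(A @ B)   # matrix multiplication:     [[19 22] [43 50]]
```

Note the difference between `A * B` (element-wise) and `A @ B` (the row-by-column product the post describes); confusing the two is one of the most common matrix bugs in practice.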
-
MIT Researchers Unveil PFGM++: Melding Physics and AI for Cutting-Edge Pattern Generation #modelresilience #AI #artificialintelligence #controlledexperiments #diffusionmodels #generativemodeling #imagequality #llm #machinelearning #parameterD #PFGM++ #posttrainingquantization #Science
MIT Researchers Unveil PFGM++: Melding Physics and AI for Cutting-Edge Pattern Generation
https://multiplatform.ai
Associate Manager ML at Accenture
SOURCE -- https://arxiv.org/pdf/1906.02691