Questions tagged [bayesian-optimization]
Bayesian optimization is a family of global optimization methods that use previously computed values of the function to infer which function values are plausibly optima. Its applications include computer experiments and hyper-parameter optimization of machine learning models.
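A minimal sketch of the loop this tag describes, assuming a squared-exponential Gaussian-process surrogate and an expected-improvement acquisition; the toy objective, lengthscale, and initial design below are illustrative, not taken from any question on this page:

```python
import numpy as np
from scipy.stats import norm

def rbf(a, b, ls=0.3):
    # Squared-exponential kernel on 1-D inputs (unit prior variance).
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

def gp_posterior(X, y, Xs, jitter=1e-6):
    # Closed-form GP regression posterior mean/std at test points Xs.
    K = rbf(X, X) + jitter * np.eye(len(X))
    Ks = rbf(X, Xs)
    mu = Ks.T @ np.linalg.solve(K, y)
    var = np.clip(1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0), 0.0, None)
    return mu, np.sqrt(var)

def expected_improvement(mu, sd, best):
    # EI for minimization: E[max(best - f(x), 0)] under the GP posterior.
    z = (best - mu) / np.maximum(sd, 1e-12)
    return (best - mu) * norm.cdf(z) + sd * norm.pdf(z)

f = lambda x: np.sin(3 * x) + x ** 2   # cheap stand-in for an expensive objective
grid = np.linspace(-2, 2, 401)
X = np.array([-1.5, 0.0, 1.5])         # initial design
y = f(X)
for _ in range(10):
    mu, sd = gp_posterior(X, y, grid)
    # Evaluate next wherever EI (exploitation + exploration) is largest.
    x_next = grid[np.argmax(expected_improvement(mu, sd, y.min()))]
    X, y = np.append(X, x_next), np.append(y, f(x_next))
```

After a handful of iterations the incumbent `y.min()` approaches the true minimum of the toy objective near $x \approx -0.46$.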
192 questions
0 votes · 0 answers · 15 views
Understanding Bayesian Optimal Experiment Design
I read this tutorial on Bayesian experimental design (https://pyro.ai/examples/working_memory.html) and I'm trying to wrap my head around it.
Suppose you have data (X,y).
You're thinking about ...
0 votes · 0 answers · 12 views
Expectation over cost-normalized Expected improvements
Are the following two expressions equivalent if we assume the independence of $f(x)$ and $C(x)$?
$$
E\left[\frac{E\left[\max\left(f(x) - f(x^*), 0\right)\right]} {C(x)}\right]
$$
$$
\frac{E\left[\max\...
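The second expression is cut off, but one relevant fact is easy to check numerically: for independent random variables, the expectation of a ratio factors as $E[A/C] = E[A]\,E[1/C]$, which in general differs from $E[A]/E[C]$ by Jensen's inequality. A small Monte Carlo sketch with illustrative distributions (the exponential/uniform choices are assumptions for the demo only):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
# A stands in for the inner improvement term, C for an independent positive cost.
A = rng.exponential(scale=1.0, size=n)    # E[A] = 1
C = rng.uniform(1.0, 3.0, size=n)         # E[C] = 2, E[1/C] = ln(3)/2 ~ 0.549

lhs = np.mean(A / C)                      # E[A / C]
factored = np.mean(A) * np.mean(1.0 / C)  # equals lhs under independence
naive = np.mean(A) / np.mean(C)           # E[A] / E[C] -- a different quantity
```

Here `lhs` and `factored` agree to Monte Carlo error, while `naive` is strictly smaller, so dividing inside versus outside the expectation is not interchangeable even with independence.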
0 votes · 1 answer · 28 views
Maximum likelihood estimation and Bayesian inference of variance given multiple datasets
I'm currently working on a problem where I have multiple normally distributed datasets $X_1, \dotsc, X_n$, with each dataset having its own mean $\bar x_i$ but all sharing the same variance $\sigma$. The ...
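For the frequentist half of such a setup, the MLE of a variance shared across groups pools squared deviations around each group's own mean. A small sketch on simulated data (the group means, sizes, and true variance are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
true_sigma = 2.0
# Three datasets with different means but a shared variance.
datasets = [rng.normal(loc=m, scale=true_sigma, size=200) for m in (0.0, 5.0, -3.0)]

# Pool squared deviations from each dataset's own sample mean.
ss = sum(((x - x.mean()) ** 2).sum() for x in datasets)
N = sum(len(x) for x in datasets)
k = len(datasets)

sigma2_mle = ss / N            # MLE: biased low, since k means were estimated
sigma2_unbiased = ss / (N - k) # pooled, bias-corrected estimator
```

The MLE divides by $N$ while the unbiased pooled estimator divides by $N - k$, one degree of freedom lost per estimated group mean.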
0 votes · 0 answers · 19 views
How should uncertainties be treated when scaling data for optimisation
I have a large dataset for which I am using Bayesian statistics for parameter estimation and model selection (specifically, using MultiNest).
This involves setting a prior over which the nested ...
1 vote · 1 answer · 50 views
What exactly are we training across different iterations in the Gaussian Process Regression example in GPyTorch?
I am following this tutorial to implement a GP regression using GPyTorch.
Based on my understanding of GP Regression, given the training data we can compute the posterior mean and covariance using the ...
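For context on what those iterations optimize: the GP posterior itself is closed-form, so the training loop fits kernel hyperparameters by maximizing the log marginal likelihood. A plain-NumPy sketch of that objective, with a grid search standing in for GPyTorch's gradient-based loop (kernel, noise level, and data are illustrative assumptions):

```python
import numpy as np

def log_marginal_likelihood(X, y, ls, noise=0.1):
    # GP log marginal likelihood; "training" maximizes this with respect to
    # kernel hyperparameters such as the lengthscale ls.
    d2 = (X[:, None] - X[None, :]) ** 2
    K = np.exp(-0.5 * d2 / ls ** 2) + noise ** 2 * np.eye(len(X))
    sign, logdet = np.linalg.slogdet(K)
    alpha = np.linalg.solve(K, y)
    return -0.5 * (y @ alpha + logdet + len(X) * np.log(2 * np.pi))

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(-3, 3, 40))
y = np.sin(X) + 0.1 * rng.normal(size=40)

# Grid search over the lengthscale in place of Adam iterations.
lengthscales = np.linspace(0.1, 3.0, 30)
lml = [log_marginal_likelihood(X, y, ls) for ls in lengthscales]
best_ls = lengthscales[int(np.argmax(lml))]
```

Once `best_ls` (or its gradient-trained equivalent) is fixed, the posterior mean and covariance follow in closed form, exactly as the question's understanding suggests.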
2 votes · 0 answers · 64 views
Bayesian optimization for parametric curve fitting?
I am relatively new to Gaussian Processes and Bayesian Optimization. My question is very simple:
Suppose I am trying to learn a function from a parametric family of curves which best describes the ...
3 votes · 2 answers · 143 views
How to choose a point that has both optimal value and low variance
I have a Gaussian Process Regression model that models the cost of a certain process. Once trained, I want to find the point $x$ corresponding to which the regression predicts the lowest cost.
Simply ...
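One standard answer to this trade-off is a confidence-bound rule: rather than taking the argmin of the posterior mean alone, minimize $\mu(x) + \kappa\,\sigma(x)$ so that uncertain regions are penalized. A toy sketch with made-up posterior curves (the $\mu$, $\sigma$, and $\kappa$ below are illustrative stand-ins, not the question's model):

```python
import numpy as np

grid = np.linspace(0.0, 1.0, 201)
# Stand-ins for a trained GP's posterior mean and standard deviation.
mu = np.sin(6 * grid)                     # predicted cost
sigma = 0.05 + 0.4 * np.abs(grid - 0.5)   # predictive uncertainty

kappa = 2.0
# Risk-averse selection: penalize uncertainty instead of trusting mu alone.
score = mu + kappa * sigma
x_star = grid[np.argmin(score)]
```

Larger `kappa` pushes the choice toward well-explored regions; `kappa = 0` recovers the plain argmin of the mean.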
2 votes · 1 answer · 129 views
Bayesian optimization for solving least squares
Bayesian optimization with Gaussian processes (GPs) is an effective minimization methodology when the evaluation of the function to minimize, say $f(a)$, is computationally expensive.
Loosely speaking,...
0 votes · 0 answers · 13 views
A surrogate function for validation error in order to perform hyper-parameter optimization?
Grid search CV and a few other approaches can be computationally expensive for hyper-parameter tuning. Is it possible to come up with a surrogate model, or a purpose-built model, for the validation error in order ...
4 votes · 0 answers · 211 views
Bayesian Optimization: number of iterations as a function of search space dimensionality?
I am performing Bayesian Optimization to select a hyperparameter configuration for my supervised learning model. I understand that with each additional hyperparameter that I choose to optimize, the ...
1 vote · 0 answers · 40 views
Rounding Approximation for Blackbox Integer Optimization
I am working on a black-box optimization problem that involves surrogate modeling. Some of my decision variables are integers, but I doubt a MIP approach would work for my case.
My advisor told me that it is ...
1 vote · 0 answers · 29 views
Expected improvement for Bayesian linear regression with unknown noise variance
My question is basically whether the expected improvement for a Bayesian linear regression with unknown noise variance, i.e. where we place a prior on the noise variance so that the predictive distribution may not be ...
11 votes · 2 answers · 1k views
Why go through the trouble of expectation maximization and not use gradient descent?
In expectation maximization, first a lower bound on the likelihood is found, and then a two-step iterative algorithm kicks in: first we try to find the weights (the probability that a data point ...
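One concrete point of contrast: for a Gaussian mixture, each EM step is available in closed form with no step size to tune, which is a large part of its appeal over plain gradient ascent on the likelihood. A minimal sketch on synthetic 1-D data (cluster locations, initialization, and iteration count are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
# Two well-separated Gaussian clusters.
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 300)])

# EM for a 2-component mixture: the E-step computes responsibilities
# (the per-point membership probabilities), the M-step re-estimates
# parameters in closed form.
mu = np.array([-1.0, 1.0]); var = np.array([1.0, 1.0]); pi = np.array([0.5, 0.5])
for _ in range(50):
    # E-step: responsibilities r[i, k] = P(component k | x_i).
    dens = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: weighted closed-form updates of means, variances, weights.
    nk = r.sum(axis=0)
    mu = (r * x[:, None]).sum(axis=0) / nk
    var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    pi = nk / len(x)
```

The recovered means land near the true cluster centers; a gradient method on the same likelihood would need a learning rate, and naive updates can violate the constraints (positive variances, weights summing to one) that the M-step satisfies automatically.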
0 votes · 0 answers · 65 views
Tuning Random Forest results in max_features parameter taking a value of 1. Why?
I did Bayesian optimization tuning for the parameters of a random forest. With 200 iterations, it seems that about 70% of the time, very low values (1 or 2) of max_features produce better (...
0 votes · 0 answers · 23 views
Calculate acceptance ratio of Jacobian of split-merge RJMCMC
I have been studying RJMCMC and want to ask a question regarding the acceptance ratio of the split/merge step of RJMCMC.
The split/merge step suggested by Richardson and Green (1997) is as follows for $w_j$, ...