Great idea @Andre_K! I asked ChatGPT how it might be possible to do this in JavaScript, and it suggested using the Levenberg-Marquardt algorithm. There's a Node library, but it looks a bit heavyweight to adapt and integrate into the app… any thoughts on how I could approach this would be welcome!
I’ll look into this. Levenberg-Marquardt is a common curve fitting algorithm, but I’m sure we’ll find something lightweight that works in the app. I’ll let you know my findings!
Just a general question, as I'm a total JS noob: if I install Node.js and create a working prototype project for the fitting, would that be of help? I'd probably install one or two packages via npm to help. If the outcome would be unusable because of some architectural constraint, please give me a hint about what else I should base the test on.
It needs to be client-side JavaScript run in the web browser rather than server-side Node.js. This library only gives examples for server-side Node.js: GitHub - mljs/levenberg-marquardt: Curve fitting method in JavaScript. It would need something similar implemented in the client-side context…
I did ask ChatGPT to write the whole thing and it came up with this https://chatgpt.com/share/67221475-64b8-8010-a5e5-003f4ae653fc
Copying this verbatim without an understanding of the implementation seems a bit dodgy, though. I usually like to verify as best I can what ChatGPT writes.
```js
function levenbergMarquardt({ xData, yData, initialParams, model, damping = 1e-2, maxIterations = 100, tolerance = 1e-6 }) {
  const n = xData.length;         // Number of data points
  const m = initialParams.length; // Number of parameters

  // Initialize parameters
  let params = [...initialParams];
  let lambda = damping; // Initial damping parameter

  // Helper function to calculate residuals
  function calculateResiduals(params) {
    return xData.map((x, i) => yData[i] - model(params)(x));
  }

  // Helper function to calculate the Jacobian matrix (forward differences)
  function calculateJacobian(params) {
    const jacobian = Array.from({ length: n }, () => Array(m).fill(0));
    const delta = 1e-8; // Small step for numerical derivative
    const f0 = calculateResiduals(params); // Residuals at current params
    for (let j = 0; j < m; j++) {
      // Perturb parameter j
      const paramsDelta = [...params];
      paramsDelta[j] += delta;
      const f1 = calculateResiduals(paramsDelta);
      for (let i = 0; i < n; i++) {
        jacobian[i][j] = (f1[i] - f0[i]) / delta;
      }
    }
    return jacobian;
  }

  // Helper function to calculate the matrix transpose
  function transpose(matrix) {
    return matrix[0].map((_, colIndex) => matrix.map(row => row[colIndex]));
  }

  // Matrix multiplication helper
  function matMul(A, B) {
    return A.map(row => B[0].map((_, i) => row.reduce((sum, el, j) => sum + el * B[j][i], 0)));
  }

  // Add lambda to the diagonal for damping
  function addDamping(matrix, lambda) {
    return matrix.map((row, i) => row.map((val, j) => (i === j ? val + lambda : val)));
  }

  // Solve the small linear system A x = b by Gaussian elimination with partial
  // pivoting. The original ChatGPT code called numeric.solve here, which would
  // require loading the numeric.js library; this keeps the code dependency-free.
  function solveLinearSystem(A, b) {
    const N = b.length;
    const M = A.map((row, i) => [...row, b[i]]); // augmented matrix
    for (let col = 0; col < N; col++) {
      // Pivot: swap in the row with the largest entry in this column
      let pivot = col;
      for (let r = col + 1; r < N; r++) {
        if (Math.abs(M[r][col]) > Math.abs(M[pivot][col])) pivot = r;
      }
      [M[col], M[pivot]] = [M[pivot], M[col]];
      // Eliminate entries below the pivot
      for (let r = col + 1; r < N; r++) {
        const factor = M[r][col] / M[col][col];
        for (let c = col; c <= N; c++) M[r][c] -= factor * M[col][c];
      }
    }
    // Back substitution
    const x = Array(N).fill(0);
    for (let r = N - 1; r >= 0; r--) {
      let sum = M[r][N];
      for (let c = r + 1; c < N; c++) sum -= M[r][c] * x[c];
      x[r] = sum / M[r][r];
    }
    return x;
  }

  // Main optimization loop
  for (let iteration = 0; iteration < maxIterations; iteration++) {
    const residuals = calculateResiduals(params);
    const error = residuals.reduce((sum, r) => sum + r * r, 0) / 2;
    if (error < tolerance) break; // Convergence check

    const jacobian = calculateJacobian(params);
    const JT = transpose(jacobian);
    const JTJ = matMul(JT, jacobian);
    const JTr = JT.map(row => row.reduce((sum, val, i) => sum + val * residuals[i], 0));
    const JTJ_Damped = addDamping(JTJ, lambda);
    const step = solveLinearSystem(JTJ_Damped, JTr); // Solve for parameter step

    const newParams = params.map((p, i) => p - step[i]);
    const newResiduals = calculateResiduals(newParams);
    const newError = newResiduals.reduce((sum, r) => sum + r * r, 0) / 2;

    // Accept the step only if it reduced the error
    if (newError < error) {
      params = newParams;
      lambda /= 10; // Decrease damping after a successful step
    } else {
      lambda *= 10; // Increase damping after a failed step
    }
    if (Math.abs(newError - error) < tolerance) break; // Convergence check
  }
  return params;
}

// Usage Example:
// Define the model function (e.g., linear model y = ax + b)
const model = ([a, b]) => (x) => a * x + b;

// Define data points
const xData = [1, 2, 3, 4, 5];
const yData = [2, 4.1, 5.9, 8.05, 9.9];

// Initial guess for parameters [a, b]
const initialParams = [1, 1];

// Run the Levenberg-Marquardt algorithm
const fittedParams = levenbergMarquardt({
  xData,
  yData,
  initialParams,
  model,
  damping: 0.01,
  maxIterations: 100,
  tolerance: 1e-6
});
console.log("Fitted Parameters:", fittedParams);
```
I never wrote the actual optimization backend myself because those algorithms are tricky.
One issue is also that I think we need a constrained optimization, since the expression can become undefined, e.g. for n = 1 (the closed-form solution contains a 1/(1-n) exponent).
One brute-force approach could be this (a sketch follows the list):
- Don't fit T0 but fix it at the first (max) value. Then the optimization is 2D.
- Brute-force iterate over k and n. We can just try out what sensible ranges are.
- Pick the k and n combo that fits best.
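A minimal sketch of that grid search, assuming the model under discussion is d(deltaT)/dt = -k*deltaT^n with its closed-form solution for n ≠ 1; the array names and candidate ranges below are made up for illustration, not anything from the app:

```js
// Hedged sketch of the 2D brute-force search. tData/deltaTData and the
// candidate grids are illustrative assumptions.
function bruteForceFit(tData, deltaTData, kCandidates, nCandidates) {
  const deltaT0 = deltaTData[0]; // fix the start at the first (max) value
  let best = { k: NaN, n: NaN, sse: Infinity };
  for (const k of kCandidates) {
    for (const n of nCandidates) {
      // Closed-form solution of d(deltaT)/dt = -k*deltaT^n for n != 1:
      //   deltaT(t) = (deltaT0^(1-n) + (n-1)*k*t)^(1/(1-n))
      let sse = 0;
      for (let i = 0; i < tData.length; i++) {
        const base = Math.pow(deltaT0, 1 - n) + (n - 1) * k * tData[i];
        const predicted = base > 0 ? Math.pow(base, 1 / (1 - n)) : 0;
        sse += (deltaTData[i] - predicted) ** 2;
      }
      if (sse < best.sse) best = { k, n, sse };
    }
  }
  return best;
}

// Example candidate grids; sensible ranges are up to experimentation.
// Note n = 1 is deliberately skipped (the closed form is undefined there).
const kCandidates = Array.from({ length: 100 }, (_, i) => 0.001 * (i + 1));
const nCandidates = Array.from({ length: 50 }, (_, i) => 1.01 + 0.01 * i);
// const { k, n } = bruteForceFit(tData, deltaTData, kCandidates, nCandidates);
```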
Here we go - finally a chance to use that undergrad physics stuff from 20+ years ago!
We can just take the log of the original differential equation (negating both sides first, since the derivative is negative while cooling). Then we have

log(-d(deltaT)/dt) = log(k*deltaT^n) = log(k) + n log(deltaT).

This is a linear equation in the log of the temperature difference: a linear regression gives n as the slope and log(k) as the intercept. Now the steps to solve this are super simple and require no fancy libraries. Here's how I did it (a JavaScript sketch follows the list):
- Select an appropriate data window. Discard values that are too close to room temperature (deltaT < 2, but this can be experimented with)
- We have to work with a numerical derivative, which amplifies noise. Hence we smooth the data, though not so much that the time dependence is drowned out. I just used a simple Gaussian with adjustable window size & variance. Again, up to experimentation; a simple running average would do.
- We compute the numerical derivative (central differences)
- Take the log of the (negated) derivative and of deltaT, and perform a simple linear regression from which we recover k and n.
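Since the attached code is Python, here is a rough client-side JavaScript sketch of the same pipeline, assuming a uniform sample interval and a monotonically decaying deltaT series; the cutoff and smoothing defaults are illustrative rather than tuned values:

```js
// Rough sketch of the log-regression pipeline in client-side JS. Assumes a
// uniform sample interval dt and a monotonically decaying deltaT series.
function fitKandN(deltaT, dt, cutoff = 2, halfWidth = 3, sigma = 1.5) {
  // 1. Data window: discard values too close to room temperature. Since
  //    deltaT decays monotonically, this keeps a contiguous prefix.
  const windowed = deltaT.filter(v => v >= cutoff);

  // 2. Gaussian smoothing to tame the noise the derivative will amplify
  const weights = [];
  for (let k = -halfWidth; k <= halfWidth; k++) {
    weights.push(Math.exp(-(k * k) / (2 * sigma * sigma)));
  }
  const smooth = windowed.map((_, i) => {
    let acc = 0, norm = 0;
    for (let k = -halfWidth; k <= halfWidth; k++) {
      const j = i + k;
      if (j >= 0 && j < windowed.length) {
        acc += weights[k + halfWidth] * windowed[j];
        norm += weights[k + halfWidth];
      }
    }
    return acc / norm; // renormalize at the edges
  });

  // 3. Numerical derivative via central differences (interior points only)
  const x = []; // log(deltaT)
  const y = []; // log(-d(deltaT)/dt)
  for (let i = 1; i < smooth.length - 1; i++) {
    const deriv = (smooth[i + 1] - smooth[i - 1]) / (2 * dt);
    if (deriv < 0 && smooth[i] > 0) { // only decaying samples survive the log
      x.push(Math.log(smooth[i]));
      y.push(Math.log(-deriv));
    }
  }

  // 4. Simple linear regression y = log(k) + n*x
  const N = x.length;
  const meanX = x.reduce((a, b) => a + b, 0) / N;
  const meanY = y.reduce((a, b) => a + b, 0) / N;
  let sxy = 0, sxx = 0;
  for (let i = 0; i < N; i++) {
    sxy += (x[i] - meanX) * (y[i] - meanY);
    sxx += (x[i] - meanX) ** 2;
  }
  const n = sxy / sxx;                   // slope is the exponent n
  const k = Math.exp(meanY - n * meanX); // intercept is log(k)
  return { k, n };
}
```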
The least-squares fit is better because it does not require those noisy differences, and it is also more reliable, but the results are very close and definitely good enough for our purpose. As far as I can see, no libraries or fancy fits are needed - this should work nicely on the client side. I have attached my Python code. Beware, it is largely ChatGPT-written, but I did compare the results to the previous fit and they seem to make sense. However, things like array misalignments or unconsidered edge artifacts in the smoothing might still be lurking in there.
fit_n_log_regression.py.txt (5.8 KB)
Edit:
One other advantage of this approach is that we can just use data from multiple off-cycles without any modification. Just accumulate all flow temperatures and derivatives during compressor-off and do the regression - no need for the values to be from the same cycle.
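For illustration, the pooling could look like the sketch below, where `offCycles` and `collectLogLogPoints` are hypothetical names: the helper would run the windowing, smoothing, and derivative steps above on one off-cycle and return its log-log point pairs.

```js
// Hypothetical sketch: pool log-log points from several off-cycles, then run a
// single regression over all of them. offCycles and collectLogLogPoints are
// assumed names, not part of the app.
const logDeltaT = [];   // pooled log(deltaT) values
const logNegDeriv = []; // pooled log(-d(deltaT)/dt) values
for (const cycle of offCycles) {
  const pts = collectLogLogPoints(cycle.deltaT, cycle.dt); // -> { x: [...], y: [...] }
  logDeltaT.push(...pts.x);
  logNegDeriv.push(...pts.y);
}
// One linear regression over the pooled points then yields n (slope) and
// log(k) (intercept), exactly as in the single-cycle case.
```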