I've been testing a method of circuit simulation where I assign an initial voltage to each node, set the current through each wire according to Ohm's law, and then adjust the voltage of each node in proportion to its net current inflow (which I think is correct under certain assumptions). At each timestep, I clamp the nodes at the voltage source's terminals: zero at the negative terminal and the desired voltage at the positive terminal.
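For concreteness, here's a minimal Python sketch of the scheme as I understand it; the node/wire representation, the step size `alpha`, and the function name `relax` are my own choices for illustration, not part of any established method:

```python
import numpy as np

# Hypothetical sketch of the update rule described above.
# Nodes are integers 0..n-1; each wire is (i, j, conductance);
# each source terminal is (node, clamped_voltage).
def relax(n, wires, sources, alpha=0.1, steps=5000):
    v = np.zeros(n)                  # initial node voltages
    for _ in range(steps):
        inflow = np.zeros(n)         # net current into each node
        for i, j, g in wires:
            cur = g * (v[i] - v[j])  # Ohm's law: current flowing from i to j
            inflow[i] -= cur         # leaves node i
            inflow[j] += cur         # enters node j
        # Nudge each node toward current balance. alpha must be small
        # relative to the node conductances or the iteration diverges.
        v += alpha * inflow
        for node, volts in sources:  # re-clamp the source terminals
            v[node] = volts
    return v

# Two 1-ohm resistors in series across a 1 V source (nodes 0-1-2):
# the midpoint should settle near 0.5 V.
print(relax(3, wires=[(0, 1, 1.0), (1, 2, 1.0)],
            sources=[(0, 1.0), (2, 0.0)]))   # ~ [1.0, 0.5, 0.0]
```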
The circuit appears to converge to an equilibrium state, and in the small number of test cases I've checked by hand (small voltage and current dividers), the steady state appears to be correct. The update rule is also linear (aside from the behavior at the voltage source, which I think could be improved), which makes the process resemble the power method for dominant-eigenvector approximation; that suggests the convergence properties should be well-behaved and that the idea should generalize to other linear circuit elements.
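To spell out the analogy (my notation, not from any reference): collect the wire conductances into the weighted graph Laplacian $L$, so that the net current into node $i$ is $-(Lv)_i$. Away from the clamped source nodes, one sweep with step size $\alpha$ is then

$$v^{(k+1)} = v^{(k)} - \alpha L v^{(k)} = (I - \alpha L)\,v^{(k)},$$

i.e. repeated multiplication by the fixed matrix $I - \alpha L$ (with the clamped rows overwritten each sweep), which is where the resemblance to power iteration comes from; convergence should then hinge on the spectrum of that matrix.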
However, I haven't been able to find a name for this technique or any record of its widespread use. It seems like a straightforward derivation that sidesteps the superlinear cost of setting up and directly solving the nodal-analysis equations (each sweep here is only linear in the number of wires), at the expense of being an approximation, so I would expect some precedent for its use. Is this the case? I don't want to spend time reinventing the wheel. As far as I can tell, most circuit simulators find steady states by directly solving a system of equations, but I'm interested in the case of extremely large circuits.