Right now the optimizer loses information across constant values. For example:
```python
def test_propagate_constants_sources(self):
    def thing(unused):
        x = 0
        for _ in range(100):
            x = 1
            y = Foo.attr + Foo.attr
            # Type information of `Foo.attr` is not propagated to here.
            z = Foo.attr + Foo.attr
        return x

    class Foo:
        attr = 1

    res, ex = self._run_with_optimizer(thing, 1)
    opnames = list(iter_opnames(ex))
    self.assertIsNotNone(ex)
    guard_type_version_count = opnames.count("_GUARD_BOTH_INT")
    # Test fails because we insert 2 type guards instead of 1
    # (i.e. type information is not propagated, and guards are repeated).
    self.assertEqual(guard_type_version_count, 1)
```
After we promote `Foo.attr` to a constant, we don't keep any source information, so we lose track of the fact that the first `Foo.attr` is the same as the subsequent ones. `LOAD_CONST_INLINE` then loads a brand-new constant symbol each time, carrying no information.
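A toy model of the problem (the names here are illustrative, not CPython's actual optimizer API): because each promotion manufactures a fresh symbol with no provenance, the abstract interpreter cannot see that the second `Foo.attr` is the value it already guarded, so it emits a redundant guard.

```python
guards = []

def promote_without_identity(typ):
    # Every promotion creates a brand-new symbol: no shared provenance.
    return {"typ": typ, "guarded": False}

def guard_both_int(sym):
    # Emit a type guard unless this particular symbol was already guarded.
    if not sym["guarded"]:
        sym["guarded"] = True
        guards.append("_GUARD_BOTH_INT")

# y = Foo.attr + Foo.attr  -> first promotion, first guard
guard_both_int(promote_without_identity(int))
# z = Foo.attr + Foo.attr  -> a new, unrelated symbol, so a second guard
guard_both_int(promote_without_identity(int))

assert guards == ["_GUARD_BOTH_INT", "_GUARD_BOTH_INT"]  # 2 guards, want 1
```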
Relevant code: https://github.com/python/cpython/blob/main/Python/optimizer_bytecodes.c#L422
A possible solution would be to keep some sort of ID around for all constant promotions.
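A minimal sketch of that idea, assuming an intern table keyed by the promotion source (again hypothetical names, not the real optimizer API): two promotions of the same attribute resolve to one shared symbol, so the second type guard can be elided.

```python
class ConstSymbol:
    """An abstract constant value; its identity carries provenance."""
    def __init__(self, const_id, typ):
        self.const_id = const_id   # ID of the promotion source, e.g. Foo.attr
        self.typ = typ
        self.guarded = False       # has a type guard already been emitted?

class Promoter:
    def __init__(self):
        self._interned = {}        # source key -> ConstSymbol

    def promote(self, source_key, typ):
        # Reuse the symbol for a source we have already promoted,
        # instead of creating a brand-new symbol with no information.
        sym = self._interned.get(source_key)
        if sym is None:
            sym = self._interned[source_key] = ConstSymbol(source_key, typ)
        return sym

def guard_both_int(sym, guards):
    # A guard is only needed the first time we see this symbol.
    if not sym.guarded:
        sym.guarded = True
        guards.append("_GUARD_BOTH_INT")

guards = []
p = Promoter()
# Two loads of Foo.attr in the same trace map to one interned symbol:
a = p.promote(("Foo", "attr"), int)
guard_both_int(a, guards)
b = p.promote(("Foo", "attr"), int)
guard_both_int(b, guards)
assert a is b
assert len(guards) == 1  # the redundant second guard is gone
```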
Has this already been discussed elsewhere?
No response given
Links to previous discussion of this feature:
No response