
  • In some regards you seem to have pessimised rather than optimised. The only thing which really looks like an optimisation is allowing bits to clash by using ulong and letting the shift wrap instead of using BigInteger. (Commented Nov 6, 2014 at 17:06)
  • @PeterTaylor The most important optimisation is in the HasPropertyX function. The function returns as soon as the first collision of column sums is found (unlike your scoreLyndonWord function). I've also sorted the column masks so that we first check the column sets that are most likely to collide. These two optimisations improved the performance by an order of magnitude. (Commented Nov 6, 2014 at 17:24)
  • Although performance changes are often surprising, in principle the early abort shouldn't give more than a factor of 2, and GetSumOfColumns adds an extra loop which I would expect to cost more than that factor of 2. The mask sorting sounds interesting: maybe you could edit the answer to talk a bit about it? (And at some point I will experiment with an alternative way to do the early abort: the reason I can't do it is that HashSet doesn't support concurrent iteration and modification, but I have ideas to avoid the need for an iterator.) (Commented Nov 6, 2014 at 17:31)
  • 2
    \$\begingroup\$ @justhalf, using a Gray-esque approach for iterating over the subsets of a fixed size is actually worthwhile. It allowed me to find a 26/14 in under 9 minutes and 34 of them in two hours, at which point I aborted. Currently testing to see whether I can get 28/15 in a reasonable time. \$\endgroup\$ Commented Nov 7, 2014 at 18:00
  • 1
    \$\begingroup\$ @Lembik, I explored 29/15 exhaustively in 75.5 hours. 31/16 would take about 3 times as long - so more than a week. Both of us have made some optimisations since I started running that test of 29/15, so maybe it would be down to a week now. There's nothing stopping you from compiling my code or SuboptimusPrime's code and running it yourself if you have a computer which you can leave on for that long. \$\endgroup\$ Commented Nov 12, 2014 at 10:47
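The first comment mentions "allowing bits to clash by using ulong and letting the shift wrap instead of using BigInteger". A minimal Python sketch of that idea, not the answer's actual code: the helper name `pack_sums`, the 8-bit field width, and the XOR combination are all illustrative assumptions. In C#, `<<` on a ulong masks the shift count to its low six bits, which the `% 64` below mimics.

```python
MASK64 = (1 << 64) - 1  # emulate a C# ulong in Python's unbounded ints

def pack_sums(sums, bits_per_sum=8):
    """Pack per-row column sums into one 64-bit word, letting the shift
    wrap past bit 63 (as C#'s << on ulong does, since the shift count is
    taken mod 64). Distinct sum vectors may therefore collide in the
    packed word - a deliberate trade of exactness for speed compared to
    building an exact arbitrary-precision (BigInteger) key."""
    word = 0
    for i, s in enumerate(sums):
        shift = (i * bits_per_sum) % 64   # wrap instead of widening
        word ^= (s << shift) & MASK64
    return word
```

With 8-bit fields, the ninth sum wraps back onto bit 0 and can clash with the first; accepting such clashes is what makes the 64-bit word cheaper than an exact wide key.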
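The early-abort optimisation described in the comments can be sketched as follows. This is an illustrative assumption, not SuboptimusPrime's actual HasPropertyX: the function name `has_property_x`, the column/mask representation, and the tuple-of-sums key are all invented for the sketch.

```python
def has_property_x(matrix, masks):
    """Return True as soon as two different column subsets have equal
    component-wise sums.

    matrix: list of columns, each column a tuple of 0/1 entries.
    masks:  iterable of bitmasks selecting column subsets, assumed to be
            pre-sorted so the subsets most likely to collide come first.
    """
    seen = set()
    for mask in masks:
        # Component-wise sum of the columns selected by this mask.
        total = tuple(
            sum(col[r] for c, col in enumerate(matrix) if mask >> c & 1)
            for r in range(len(matrix[0]))
        )
        if total in seen:       # first collision found -> abort immediately
            return True
        seen.add(total)
    return False
```

The point of the comment thread is the combination: aborting on the first collision saves on average, and ordering `masks` so likely colliders come first makes that abort fire early.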