29 events
when toggle format what by license comment
Apr 13, 2017 at 12:39 history edited CommunityBot
replaced http://codegolf.stackexchange.com/ with https://codegolf.stackexchange.com/
Nov 12, 2014 at 10:47 comment added Peter Taylor @Lembik, I explored 29/15 exhaustively in 75.5 hours. 31/16 would take about 3 times as long - so more than a week. Both of us have made some optimisations since I started running that test of 29/15, so maybe it would be down to a week now. There's nothing stopping you from compiling my code or SuboptimusPrime's code and running it yourself if you have a computer which you can leave on for that long.
Nov 12, 2014 at 5:11 comment added Suboptimus Prime @Lembik Probably not. I've searched for 31/16 matrices for 10 hours and didn't find anything. An exhaustive search would probably take much longer, since even 29/15 didn't finish in 10 hours.
Nov 11, 2014 at 20:55 comment added user9206 Are you able to explore 31/16 and/or 32/16 exhaustively?
Nov 11, 2014 at 19:11 history edited Suboptimus Prime CC BY-SA 3.0
added 3257 characters in body
Nov 9, 2014 at 23:18 comment added Peter Taylor @Lembik, my code takes about 20 seconds to check a 28/15. I'm searching for 29/15, currently at 35 hours with no success. I'm also searching for 30/16, with some additional heuristics to try to speed it up, but no hits yet in just under 6 hours.
Nov 9, 2014 at 20:28 comment added Suboptimus Prime @Lembik I suspect that there are something like a billion 40/20 matrices that do have property X for each one that doesn't, so the probability of finding one is not very high. This is just a guess, but I can't check it anyway, since the current version of my program cannot work with 40/20 matrices.
Nov 9, 2014 at 20:11 comment added user9206 If each test is quick and the memory problems are ok, it might be worth trying a few thousand random 40/20 matrices where the first row has roughly equal numbers of 1s and 0s.
Nov 9, 2014 at 20:09 comment added Suboptimus Prime @Lembik I didn't really check how long it takes to test a single matrix, but it's probably just a tiny fraction of a second. It took over 3 hours to find a 30/16 matrix that doesn't have property X, though, since the number of matrices grows exponentially as the number of columns increases.
Nov 9, 2014 at 19:48 comment added user9206 @SuboptimusPrime It would be great to find any ratio that contradicts the hypothesis that increasing the number of columns by 2 always increases the number of rows by 1. If the limit of the score really is 2, I feel that a) would be very surprising and b) ought to have a proof. How long does each matrix take to test in the 30/16 case?
Nov 9, 2014 at 19:45 comment added Suboptimus Prime @Lembik I'm pretty sure that 15 rows is optimal for 28 columns although there could be a 29/15 matrix without property X. I intentionally skipped 29/15 and went straight for 30/16 since 29/15 was taking too long.
Nov 9, 2014 at 19:39 history edited Suboptimus Prime CC BY-SA 3.0
added 4914 characters in body
Nov 9, 2014 at 19:36 comment added user9206 30/16 is great! Seems we are creeping towards 2... but can we actually get there? Do you think 16 is optimal for 30, 15 is optimal for 28, etc.?
Nov 9, 2014 at 19:30 history edited Suboptimus Prime CC BY-SA 3.0
added 4914 characters in body
Nov 9, 2014 at 19:25 history edited Suboptimus Prime CC BY-SA 3.0
added 4914 characters in body
Nov 8, 2014 at 13:09 history edited Suboptimus Prime CC BY-SA 3.0
added 689 characters in body
Nov 7, 2014 at 18:00 comment added Peter Taylor @justhalf, using a Gray-esque approach for iterating over the subsets of a fixed size is actually worthwhile. It allowed me to find a 26/14 in under 9 minutes and 34 of them in two hours, at which point I aborted. Currently testing to see whether I can get 28/15 in a reasonable time.
Nov 7, 2014 at 16:23 comment added Suboptimus Prime @justhalf Btw, the maximum for my algorithm is actually 27 columns, but there are no 27/14 matrices without property X, so I can't improve my score without optimizing the memory usage of my program.
Nov 7, 2014 at 16:16 comment added Suboptimus Prime @justhalf Using Gray code means that you have to iterate through column sets in a different order. Even though each iteration is faster, you end up doing way more iterations before you find two column sets with the same vector sum. I've tried it and it was about 20 times slower than my current algorithm.
Nov 7, 2014 at 7:40 comment added Peter Taylor @justhalf, I've done Gray code, but it doesn't seem to make enough of a difference. Well, overall with some other optimisations I have a 7.5-fold improvement for MAX_N=22, but I think the biggest improvement comes from finding a better way to not use BigInteger.
Nov 7, 2014 at 2:08 comment added justhalf @SuboptimusPrime: Hmm, yeah, the summing based on columns is basically what I do in my answer as well. But you used a faster language (C# > Python in speed) and a faster algorithm (Peter's > mine). I saw in another thread that you can use a Gray code as the mask to do an incremental sum, so you only need to add/remove one vector each iteration. Perhaps you want to implement that? EDIT: ah, but your limit is the memory, so I guess 26 columns is the maximum for your algorithm so far?
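The incremental-sum trick justhalf describes can be sketched as follows (illustrative Python, not the answer's actual C#; the function name and representation are my assumptions). Consecutive Gray-code masks differ in exactly one bit, so each step adds or removes a single column vector instead of recomputing the whole subset sum:

```python
def gray_code_subset_sums(cols):
    """Yield (mask, sum) for every nonempty subset of columns.

    cols is a list of column vectors (tuples of 0/1). The running sum is
    updated incrementally: the Gray code of i is i ^ (i >> 1), and two
    consecutive Gray codes differ in exactly one bit, so each iteration
    adds or removes exactly one column vector.
    """
    n = len(cols)
    n_rows = len(cols[0])
    total = [0] * n_rows
    mask = 0                           # Gray code of 0 is 0 (empty subset)
    for i in range(1, 1 << n):
        gray = i ^ (i >> 1)
        flipped = mask ^ gray          # exactly one bit set
        c = flipped.bit_length() - 1   # index of the flipped column
        sign = 1 if gray & flipped else -1  # bit turned on: add; off: remove
        for r in range(n_rows):
            total[r] += sign * cols[c][r]
        mask = gray
        yield mask, tuple(total)
```

As the thread notes, this makes each iteration cheaper but forces a different enumeration order of the column sets, which is why Suboptimus Prime found it slower in practice than checking the likeliest collisions first.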
Nov 6, 2014 at 18:33 comment added Suboptimus Prime @PeterTaylor Added some explanation to the answer, hopefully it clears things up a bit.
Nov 6, 2014 at 18:32 history edited Suboptimus Prime CC BY-SA 3.0
added 998 characters in body
Nov 6, 2014 at 18:15 history edited Suboptimus Prime CC BY-SA 3.0
added 970 characters in body
Nov 6, 2014 at 17:31 comment added Peter Taylor Although performance changes are often surprising, in principle the early abort shouldn't give more than a factor of 2, and GetSumOfColumns adds an extra loop which I would expect to cost more than that factor of 2. The mask sorting sounds interesting: maybe you could edit the answer to talk a bit about it? (And at some point I will experiment with an alternative way to do the early abort: the reason I can't do it is that HashSet doesn't support concurrent iteration and modification, but I have ideas to avoid the need for an iterator).
Nov 6, 2014 at 17:24 comment added Suboptimus Prime @PeterTaylor The most important optimisation is in HasPropertyX function. The function returns as soon as the first collision of column sums is found (unlike your scoreLyndonWord function). I've also sorted the column masks in such a way that we first check the column sets that are more likely to collide. These two optimisations improved the performance by an order of magnitude.
Nov 6, 2014 at 17:06 comment added Peter Taylor In some regards you seem to have pessimised rather than optimised. The only thing which really looks like an optimisation is allowing bits to clash by using ulong and letting the shift wrap instead of using BigInteger.
Nov 6, 2014 at 16:41 history edited Suboptimus Prime CC BY-SA 3.0
added 30 characters in body
Nov 6, 2014 at 16:09 history answered Suboptimus Prime CC BY-SA 3.0