A challenge needs an objective winning criterion

If I want to test an answer, I can wire up the same arrangement on a breadboard but may get different results. If, however, the challenge requires a specific digital simulator, then answers can be tested objectively, with everyone agreeing on the result.

I see no problem with such challenges, provided that each one specifies a freely available digital testing environment. A free trial period is not sufficient to qualify as "freely available": for example, successive challenges may be posted further apart than the trial period lasts.

Why this needs to be picky

This may seem overly strict at first glance, as most breadboard circuits are (naturally) designed to work consistently when constructed by more than one person. This makes them closer to digital devices, whose behaviour will tend to match that of a digital simulator.

The reason we need to be stricter here is that golfing will always push towards the limits of what is possible, so answers will no longer carefully avoid analogue behaviour, and may even depend on it. It seems unlikely we could find an objective way to draw a line between what is digital enough to be a valid answer, and what is chaotic enough to be non-reproducible and therefore invalid.

The simple solution is to use a digital simulator, which will allow for all the digital approaches a challenge writer is hoping for, without the analogue problems that golfing is likely to push towards.
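To make the idea of "objective testing" concrete, here is a minimal sketch of what a deterministic gate-level simulator looks like. It is a hypothetical illustration only (a real challenge would name an actual tool): given the same netlist and the same inputs, every machine produces the same outputs, so test cases can be checked with everyone agreeing on the result.

```python
# Minimal sketch of a deterministic gate-level simulator (hypothetical,
# for illustration only). A circuit is a list of (output, gate, inputs)
# triples evaluated in order, so the same netlist and inputs always give
# the same result on any machine.

GATES = {
    "AND": lambda a, b: a & b,
    "OR":  lambda a, b: a | b,
    "XOR": lambda a, b: a ^ b,
    "NOT": lambda a: 1 - a,
}

def simulate(netlist, inputs):
    """Evaluate a combinational netlist; `inputs` maps net names to 0/1."""
    nets = dict(inputs)
    for out, gate, args in netlist:
        nets[out] = GATES[gate](*(nets[a] for a in args))
    return nets

# Example circuit: a half adder (sum = a XOR b, carry = a AND b).
half_adder = [
    ("sum",   "XOR", ("a", "b")),
    ("carry", "AND", ("a", "b")),
]

for a in (0, 1):
    for b in (0, 1):
        r = simulate(half_adder, {"a": a, "b": b})
        print(a, b, "->", r["sum"], r["carry"])
```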

Picking an appropriate simulator

When writing a challenge, consider what types of answer you would like to see. You may be able to find (or write) digital simulators that model noise and analogue effects to different degrees. Provided each allows for objective testing (same results on different machines), all are valid choices, so simply specify the one whose range of behaviour covers the space in which you want people to compete.
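As a hedged sketch of that last point: even a simulator that models "noise" can stay objectively testable if the noise source is pseudo-random with a seed fixed by the challenge, so every machine sees the same sequence of glitches. The example below is hypothetical; real simulators expose this in their own ways.

```python
# Sketch of reproducible "noise": a fixed seed means two runs (or two
# machines) produce identical results, keeping the test objective.
import random

def noisy_invert(bits, seed, flip_probability=0.01):
    """Invert a bit sequence, flipping occasional bits with seeded noise."""
    rng = random.Random(seed)  # fixed seed => same glitches everywhere
    return [
        (1 - b) ^ (1 if rng.random() < flip_probability else 0)
        for b in bits
    ]

signal = [0, 1, 1, 0, 1, 0, 0, 1] * 4
run_1 = noisy_invert(signal, seed=42)
run_2 = noisy_invert(signal, seed=42)
assert run_1 == run_2  # same seed, same "analogue" behaviour on every machine
```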
