34 events
when | what | by | license | comment
Apr 10 at 16:05 vote accept mkinson
Apr 3 at 18:55 answer added Peter - Reinstate Monica timeline score: 0
Apr 3 at 18:54 history closed Jo Wehler, David Gudeman, Hokon, mkinson, Philip Klöcking
Duplicate of: Does the use of AI make someone more intelligent?
Apr 3 at 18:12 answer added cjs timeline score: 0
Apr 3 at 16:34 answer added Speakpigeon timeline score: 0
Apr 3 at 16:04 history edited mkinson CC BY-SA 4.0
added 770 characters in body
Apr 3 at 15:45 answer added Steven Armstrong timeline score: 2
Apr 3 at 14:29 answer added SonOfThought timeline score: 2
Apr 3 at 12:50 comment added J D @haxor789 Smarter entities always adapt to less intelligent entities. But that doesn't mean less intelligent entities don't understand something or exhibit some intelligence.
Apr 3 at 9:53 comment added haxor789 Which puts AI in a weird middle ground: it's sort of like a human that thinks for itself and evaluates your commands rather than executing them, while at the same time it is not a human and doesn't think for itself. What it does also isn't deterministic anymore but adaptive, so who is learning whose language? Is AI a mirror, so is it teaching introspection? Do you have to guess what the developers, or the average person after whom it is modeled, are like? Is it some kind of religion where you don't know how it works but just have faith that it does?
Apr 3 at 9:37 comment added haxor789 @JD Low-code or no-code is a radically different kind of programming. If you have ever worked with people, you might develop the idea that "if they wouldn't think so damn much but just did what I tell them, everything would be fine". Whereas if you have ever worked with a machine that does exactly what you tell it, you find out pretty quickly that there is a discrepancy between what you say and what you mean, and it's not the machine that adapts to you (it can't); you are the one who needs to adapt to the machine, to rephrase your commands in its language so that it can understand them.
Apr 3 at 8:14 answer added AnoE timeline score: 1
Apr 3 at 6:40 comment added Dikran Marsupial @JD The current direction of LLMs won't support that direction because they have no understanding of the code they produce. It will erode the end of the job market where deep understanding of the code is not required (aspects of UI implementation), but it is unlikely to make significant inroads into e.g. security. The problem is how to develop that understanding in an environment that makes solving simple programming tasks trivial.
Apr 3 at 6:03 comment added Toffomat Obligatory xkcd: xkcd.com/1289
Apr 3 at 0:49 history edited Mark Andrews CC BY-SA 4.0
Formatting
Apr 3 at 0:22 comment added Scott Rowe My perception is that this is a branch point, a watershed where most people go off in a new direction. In the past 100+ years, many, many things have stopped being learned, and it doesn't affect most people at all because technology has papered over it. The problem arises in cases where deep knowledge is required: when an unexpected problem comes up, or there is a widespread failure, war conditions or something. Someone still needs to know, but far fewer people. Not much horse riding these days, or woodworking. But most people couldn't even diagnose a problem with their car.
Apr 2 at 19:32 history became hot network question
Apr 2 at 19:02 comment added J D @DikranMarsupial While I appreciate your point, I would note that complex systems are seldom dominated by a single variable. As technology becomes more sophisticated, it will impact the system in ways that might not be foreseeable. Perhaps a tipping point will be reached in social institutions where accountability radically advances, holding students more accountable in the future by way of a political movement. Perhaps ChatGPT will enable everyone to become programmers by moving us towards a low-code/no-code future. Imagine if LCARS was the norm. ; )
Apr 2 at 17:15 comment added Dikran Marsupial @JD it doesn't really matter if it is a failure of technology or of human nature, the question is whether it will make us less able to think, and in this case, the answer is probably "yes" and we will end up with programmers out of a job and a world full of bad (that wasn't my first choice of word ;o) code because ChatGPT doesn't understand it either.
Apr 2 at 16:49 comment added J D @DikranMarsupial That's not a failure of technology. It's a failure of human accountability.
Apr 2 at 16:13 answer added ac15 timeline score: 5
Apr 2 at 15:24 comment added TKoL I'd be willing to bet some small subset of artists who have started using AI image generation software have already had their ability to generate new visual ideas negatively impacted by these tools.
Apr 2 at 15:18 comment added Dikran Marsupial @ScottRowe me too! There is a compromise in the middle somewhere ;o) The problem is that it is human nature to take the easy route; it is hard to ignore help for a deferred benefit when the assignment is due next week. There is an optimal work-benefit compromise, and I suspect that learning to program with an LLM is not it. Great tool for later on though.
Apr 2 at 14:59 comment added Scott Rowe @DikranMarsupial right, when I learned I didn't have a teacher or even any materials mostly. I just stared at the terminal and tried to make some fairly simple programs.
Apr 2 at 14:51 comment added Dikran Marsupial @ScottRowe re. learning to program, it can be very difficult to learn something if your teacher is too helpful. Over the last 25 years I have seen undergraduate students' problem-solving skills weaken as the growing amount of tutorial material (and SE) solves most of the kinds of problems that are good for teaching. It is like going to the gym and watching someone else pump iron: you may learn things, but it won't make your muscles any bigger. c.f. cseducators.stackexchange.com/questions/7339/…
Apr 2 at 14:42 history edited mkinson CC BY-SA 4.0
added 794 characters in body
Apr 2 at 14:01 comment added Scott Rowe The other day I was thinking that driving a horse-drawn wagon might have been a more difficult task than driving an automatic-transmission car. Carving might be harder than 3D printing. Recent programming methods might be easier than what I learned on. Or maybe not; we would need some research to determine these answers. I know that the way I learned to program is forgotten and never involved in learning now. AI is a similar thing, and might help us skim past a lot of details yet require us to cognize more abstract things. A long way of saying, Hmm...
Apr 2 at 12:28 comment added J D Are there precedents? Yes, the printing press. It automated the production of written language and the world largely became literate.
Apr 2 at 12:27 history edited J D
edited tags
Apr 2 at 12:26 answer added J D timeline score: 11
Apr 2 at 12:15 review Close votes (completed Apr 3 at 18:55)
Apr 2 at 12:14 comment added Jo Wehler Asking AI does not necessarily reduce our capacity for critical questioning. We need critical thinking to question the answers we get from AI. So it's up to ourselves how we use AI.
Apr 2 at 11:59 comment added mkinson @JoWehler It appears to be a start, and thank you for finding the post. However, I'm more concerned that the reliance on AI to answer our questions might stop us from thinking critically. I bring up the Renaissance and the Age of Enlightenment because we were forced to think to answer the questions we were asking. If we're just handed answers, will that not stunt our capacity for contemplation of such questions?
Apr 2 at 11:31 history asked mkinson CC BY-SA 4.0