13 Comments
Utkarsh

But my problem is that AI often gives wrong answers (yes even now), at which point I have to go looking for documentation/another person. The real problem as always is that these machines don't really think, and so I find them to be of limited utility.

dechichi

I experienced that too, so I do two things: 1 - either I'm exploring a subject I don't know much about yet, and I treat AI as a preliminary google search (go wide, take notes, then verify), or 2 - I already know what I want to learn, then I find the source material (article, book, source code) and give it to the AI for reference. Of course I read the stuff directly too, the AI is more for asking questions / iterating on the material.

Caleb Bethea

I'm starting to realize that I often ask the wrong questions, and get misled into worrying about things I shouldn't be too worried about

Petr

Totally agree. That is exactly how I mainly use "AI". Feels like Google on steroids (and then some). As someone said, "LLMs are knowledge databases which can be queried in human language". I lost my "fear" of complex topics I previously didn't touch because of their complexity and/or steep learning curve (which also means a lot of time).

That being said, they are potentially dangerous for someone without prior knowledge and an understanding of how they work. But for a quick introduction to something that is (relatively) common knowledge, they are extremely good in my experience.

It's sad that LLMs are now mainly sold as "giving a man a fish" instead of "teaching a man to fish".

Leo Sutic

I use it a lot to work my way through perceived contradictions in my mental model of something. Like, "source says A, but it also says B, which I think implies not-A, because blah blah... What's wrong?"

Thiago Pessoa

I use it as an assistant, to help me when I get stuck. Or sometimes to write minimal implementations of algorithms I want to learn, so I can have something to execute and step through in a debugger.

But you have to be careful when doing that. If you are still just learning, it may be harder to spot when the AI is just hallucinating. That's why I think it's better used as just an extra tool, and you still have to triple check what it says.

Victor Maia Aldecoa

Right on!

Natique Ibrar Alam

I rely heavily on AI for learning; it's the main reason I've been able to digest knowledge that is written in a complicated manner (and which I feel could be explained more simply).

But I still haven't found the sweet spot where I do the thinking and the AI just gives a helping hand, not a full push to the finish line. It makes me feel like I cheated through an experience that would otherwise have been very thought-provoking for me.

PS: I use neovim, btw

dawon

I've found that relying on AI for learning new concepts needs to be thought out far more than simply opening an existing book on the subject.

It wants to be an assistant, and has to be steered fairly strongly to be a good teacher. Maybe I'm just prompting wrong.

AI has made it harder for me personally to learn new things since it's so eager to just do it for you.

Like all studies, it needs to be done intentionally, otherwise it ends up not being very effective.

You can't outsource experience. At least not yet.

dechichi

yeah, you have to stay diligent and ask for friction, since these AIs are trained to please. Mostly I just say "don't do it for me", "I want to learn", and "explain the concepts from first principles".

Dias Culenga Daniel

For sure. I actually use AI as a super search engine to get information and references, and eventually to help me understand a concept once I've dedicated enough time to understanding it by myself.

Mikael Ellingson

Regarding "There is an optimal 'difficulty setting' for learning": that is a key point. The sweet spot sits between overwhelming you and giving you the answer. The problem is that when the easy answer is just one prompt away, it is very tempting. Human tutors can drop a few breadcrumbs, but I am not sure that today's LLMs can. Interesting to see!

(btw, a term for that optimum is the Zone of Proximal Development)

Reigen Swage

It can definitely be useful on occasion. The one good use of AI chatbots for me is that they can sometimes help me figure out how an API function works when I read the docs and they didn't make sense to me, or when a simple Google or Stack Overflow search didn't help. When they give an answer I can continue with follow-up questions to clarify things. Later on I can verify the answers by actually trying things myself or searching manually to see if the information holds up.

I had some good times asking Grok about OpenGL optimisations I'd missed in some old code running on a Pentium 2 with an early GeForce gfx card. Very trivial stuff for others, and surely I could have found similar information by searching manually, but it was a conversational and more direct way. I never liked asking them to write the code for me, though. It's no fun for hobby programming, and even in serious applications I wouldn't want to pollute the codebase with generated code.

P.S. Another cool use is conversation around ideas, brainstorming, like "here are the ideas I'm thinking about for optimizing rendering, what do you say? Do you have other ideas I might have missed?". But still not asking it to write the code for me.