Apparently ChatGPT can help you write code / programs?

I don't ask computers questions, never have, and I don't see myself doing it ever.

Ah, but you do already if you're a Google user ;)

If you can use it as a tool in your work, then that's great and you should do it. I think we just need to be careful not to integrate them into anything and everything.

I agree, but you know how the tech world is anyway!

Damn, it really can produce something that looks remotely useful right from the start. Wondering
now if I have what it takes to guide it through writing a GUI with LVGL for an e-bike display for me. I _hate_
writing GUIs :rolleyes:, but unsure if I hate writing to AI even more :unsure:

Let me know how it goes if you try it!
What I find it most useful for when coding is generating snippets.
AI is bad at tying lots of things together, which UI does a lot of!
 
Ah, but writing code is only part of what a programmer does for a living. It still needs supervision!
I just said, based on the answers it gave, that it could pass the interview and get a job offer. Still skeptical about the actual programming part, but maybe there's nothing magical in it and it can be reduced to mechanical calculation.

Anyway, it is apparently so good that it has scared the shit out of OpenAI - you could tell just by looking at their reactions and chaotic attempts at damage control. But now politicians have caught wind of it and are imposing bans on DeepSeek, just because it did the same thing as OpenAI (which, what a surprise, is supported by more or less the same politicians). I really hope they get more than a slap on the wrist this time. Or that, when Skynet awakens, it will go after the politicians first :)
 
Ah, but you do already if you're a Google user ;)
Then I already have my tool and I don't have any use for a new one.

(Irrelevant, but I am trying to wean myself off Google services and move into self-hosting)
 
Computers aren't things that can answer questions. Computers and software are tools, I see them in the same way I would see a screwdriver, knife or can opener. They're complex and they can do a lot more than a can opener, but at the end of the day, they're dumb tools, and I don't ask my knife the best way to cut an onion.

Computers and software are tools to access the vast human knowledge base, and they are very good at that.

If your question has been answered before, as it probably has for 'the best way to cut an onion', it's in this vast knowledge base.

I feel pretty safe getting answers from this knowledge base that way.

I feel a lot less confident asking current AI to come up with answers on its own, for questions which haven't been answered or even asked before. This is when their tendency to hallucinate becomes something you can't verify source material for.
 
Ah, but you do already if you're a Google user ;)



I agree, but you know how the tech world is anyway!



Let me know how it goes if you try it!
What I find it most useful for when coding is generating snippets.
AI is bad at tying lots of things together, which UI does a lot of!
And yet, let it never be said that I don't try new things. Putting it on my to-do list today to play around with some models, maybe I'll see what the fuss is about.
 
I feel a lot less confident asking current AI to come up with answers on its own, for questions which haven't been answered or even asked before. This is when their tendency to hallucinate becomes something you can't verify source material for.

This is why I only use LLMs that can cite sources. Everything needs to be confirmed, especially when we're talking about uncommon knowledge.

DeepSeek hallucinates the least for me though, and the card up its sleeve is that it often finds me a result that was probably on page 10 in Google.. or in a foreign-language source (which I can translate).

I use DeepSeek as a first pass for research now, then follow up with independent websites or Google to confirm.

It's been able to unearth things I couldn't find on Google, and gets me to an ideal answer quicker. That's the value for me.
 
This is why I only use LLMs that can cite sources. Everything needs to be confirmed, especially when we're talking about uncommon knowledge.

DeepSeek hallucinates the least for me though, and the card up its sleeve is that it often finds me a result that was probably on page 10 in Google.. or in a foreign-language source (which I can translate).

I use DeepSeek as a first pass for research now, then follow up with independent websites or Google to confirm.

It's been able to unearth things I couldn't find on Google, and gets me to an ideal answer quicker. That's the value for me.

Can you share an example of a prompt you gave it, and the useful response it gave you?
 
Yeah, I was researching this tool for a client.. no other LLM seemed to know what it was.

Prompt: can wikipedia's excimer profiler produce function arguments in it's trace output?

[attached screenshot of the answer]

I verified all of this with a read-through of the documentation that took about 10x longer than asking this question.

Highlighted is the one part of the answer that's wrong (profilers don't do this because it's CPU-expensive); the rest of the answer is totally correct, and the recommendation of xdebug as a way to do what I want is correct, too.

Wikimedia's Excimer profiler is extremely niche, never listed in any list of profilers for PHP, so it's pretty impressive that it understood it so well.
 
How was the rest of the answer?
Please post result!
 
Self-hosting seems far less worthwhile for normal people; I mean, he couldn't run the 'smart' model even with his setup (let alone someone without a Threadripper setup).


edit: not to mention I wasn't even aware DS has a 'reasoning model' instead of just 'predicting the next most used word'.

That is a step above, interesting. I will stop using Bing ;)
 
Self-hosting seems far less worthwhile for normal people; I mean, he couldn't run the 'smart' model even with his setup (let alone someone without a Threadripper setup).


edit: not to mention I wasn't even aware DS has a 'reasoning model' instead of just 'predicting the next most used word'.

That is a step above, interesting. I will stop using Bing ;)

Yes. Minimum cost of entry to a decent LLM will set you back about $4k USD (used Nvidia workstation card with 48 GB).
CPU is extremely slow, forget it; this software was developed for and on Nvidia GPUs and performs best on them.

I currently run Qwen2.5-coder 14B (consumes ~8 GB of VRAM) via ollama on an Nvidia 4070 and it's extremely snappy, but it doesn't have broad knowledge. I have to hand-feed it small bits of context, which is tedious, but sometimes works out in my favor.
To get to DeepSeek's level of knowledge breadth and reasoning, you need around 480 GB of VRAM, so it's about a five-figure investment to get running. And the thing probably draws 1 kW for a few seconds when answering a prompt.
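As a rough sanity check on those figures: VRAM needs scale with parameter count times the quantization width, plus runtime overhead. A back-of-envelope sketch (the 20% overhead factor for KV cache and buffers is my assumption, not a precise rule):

```python
def vram_estimate_gb(params_billions, bits_per_weight, overhead=1.2):
    """Very rough VRAM estimate: weight storage at the given quantization
    width, plus ~20% (assumed) for KV cache and runtime buffers."""
    weights_gb = params_billions * bits_per_weight / 8  # billions of params -> GB
    return weights_gb * overhead

# Qwen2.5-coder 14B at a 4-bit quant
print(round(vram_estimate_gb(14, 4), 1))   # 8.4 -> fits in a 12 GB card
# A 671B-parameter model at 4-bit is hundreds of GB of VRAM
print(round(vram_estimate_gb(671, 4), 1))  # 402.6
```

The exact totals depend on quantization format and context length, but the rule of thumb explains why a 14B model is comfortable on a consumer card while the full DeepSeek model is not.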

So.. yeah... :ROFLMAO:

The problem is that as the computer gets smarter and has more knowledge, it takes exponentially more processing power and memory. The human brain runs on ~100 W of power. The overgrown calculator can run faster than the human brain, but needs thousands of watts of instantaneous draw to perform the task.

In theory, chips/hardware/software optimization might be able to get a human equivalent computer brain to operate on only 2x-5x the electricity; at this point it's feasible to convert electricity into thought.

Of course, we have to solve the problem of where to get 2-5x the electricity we currently have without cooking the planet 😅

The advantage of an LLM is that you can use it as a side-arm brain which can be working out hard problems for you in the background. For me, I leverage it when reasonable, and it expands the amount of things I can do a bit, which is awesome.

It would be a second generation bicycle for the mind, making every human so much more effective, and that's pretty cool from a technological perspective.


I would say the technology has a way to go but the current state of the art is starting to get impressive.
 
I haven't tried DS but the more I read the more I am impressed, really. All I wanted was this 'reasoning model' instead of just being fed answers based on word prediction, which is basically what all the others do.

I mentioned CPU because of the RAM requirements; even if it would be dog slow, you would sooner be able to run the large model in a server setup with 480 GB of RAM, compared with needing the actual VRAM to run it on a GPU cluster.
I currently run Qwen2.5-coder 14B (consumes ~8 GB of VRAM)

If that model only uses 8 GB, could it run on a 3060 12 GB or do you need a 4xxx card? Because Google/Bing told me I 'need' a 4090 to run it... then it tells me it 'needs 9 GB to run optimally', so when I asked if I really needed a 4090 it suddenly said I should get a 3060 😂
 
A 3060 would work (my 4070 has 12 GB), it just won't be as quick.

Steps to install:
download ollama
open up your command prompt and type ollama; it will give you good instructions on how to use it.
browse the library for a list of what you can install.

AFAIK Qwen2.5-14B is where it's at for the best performance to accuracy/usefulness ratio at this GPU size (very constrained)
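Once a model is pulled, you can also hit ollama's local HTTP API from a script instead of the interactive prompt. A minimal sketch, assuming ollama is serving on its default port 11434 and that you've pulled a model tagged `qwen2.5-coder:14b`:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # ollama's default local endpoint

def build_payload(model, prompt):
    """Request body for ollama's /api/generate; stream=False asks for one
    complete JSON response instead of a stream of chunks."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask(model, prompt):
    data = json.dumps(build_payload(model, prompt)).encode()
    req = urllib.request.Request(OLLAMA_URL, data=data,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Requires a running ollama server:
# print(ask("qwen2.5-coder:14b", "Write a C function that reverses a string."))
```

This is handy for feeding the model those small bits of context programmatically instead of pasting them by hand.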
 
How was the rest of the answer?
Please post result!
I don't have the rest of the answer it gave me; it was overly simplistic but acceptable. I was just following your advice and asking a complex question, but my point was that I lost interest after it failed to follow its own logic within the sentence. At hundreds of watts too, as you pointed out.
 
Steps to install:

...Oh no, I said

I should get

for a reason lol, I'm one of those people who keep on running old hardware till it's incapable of doing anything. Still on an RX 580, sadly no CUDA... maybe I've got a 1650 somewhere but that doesn't help me much.

in this GPU size ( very constrained )

I am thinking, is there like a 'local assistant' type of LLM which can integrate with calendar services etc.? Having a personal assistant which doesn't run on some company server sounds very appealing. And if so, how large would it be? Would the generic
'chatbot / support' LLM be viable as a personal assistant (I always find those bots terribly limited, but they never tell you what they are based on)? And how hard would it be to write some kind of plugin to allow the LLM to interact with other services?

Like with your code LLM, could you integrate it into your workflow with automation: like tell it to 'update your GitHub' and have it upload pending changes and generate a changelog automatically based on annotations in your code?
 
That's to run the whole average human body. The brain itself is maybe 20% of that.

Even at rest, the brain using ~20% of the body's ATP energy while being only ~2% of its mass is still impressive.
 
That's to run the whole average human body. The brain itself is maybe 20% of that.

Okay, bigger than a big rift!

I am thinking, is there like a 'local assistant' type of LLM which can integrate with calendar services etc.? Having a personal assistant which doesn't run on some company server sounds very appealing. And if so, how large would it be? Would the generic
'chatbot / support' LLM be viable as a personal assistant (I always find those bots terribly limited, but they never tell you what they are based on)? And how hard would it be to write some kind of plugin to allow the LLM to interact with other services?

Like with your code LLM, could you integrate it into your workflow with automation: like tell it to 'update your GitHub' and have it upload pending changes and generate a changelog automatically based on annotations in your code?

That would be called an 'agent'... that's currently kinda bleeding edge, but there are multiple ways to do what you're talking about.
I have no idea how well those things work, but I've been contemplating finding a way to try.
My question is.... how accurate is it?
People expect me to be 98% accurate; I don't think 80% accurate is good enough!
 
My question is.... how accurate is it?
People expect me to be 98% accurate; I don't think 80% accurate is good enough!

Shouldn't be hard to set up automation using your own annotations; that's just parsing and not logic.
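For the parsing half, something as small as a regex pass over the source is enough, no LLM needed. A sketch, assuming a made-up `// @changelog:` comment convention (the tag name is hypothetical, not from any real tool):

```python
import re

# Hypothetical convention: developers leave "// @changelog: message"
# comments in the source; this pass just collects them, no logic involved.
CHANGELOG_RE = re.compile(r"//\s*@changelog:\s*(.+)")

def extract_changelog(source):
    """Collect every @changelog annotation, in file order."""
    return [m.group(1).strip() for m in CHANGELOG_RE.finditer(source)]

code = """
int add(int a, int b) { // @changelog: added overflow-safe add
    return a + b;
}
// @changelog: removed legacy parser
"""
print(extract_changelog(code))  # ['added overflow-safe add', 'removed legacy parser']
```

Piping that list into a commit message or changelog file is plain scripting; the LLM would only come in if you wanted the annotations written for you.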

But would the LLM be capable of generating annotations on its own? Generate a function description including expected input/output etc.

If you can get it to generate code based on a text input, how well can it translate a code snippet to a human-readable annotation?

And should this process be 100% reversible, where it generates the same code output based on an annotation as it generated the annotation based on a code snippet? If it's not, but the answers are still correct, does this make the answer worth less (because there is variance, even if the intrinsic meaning stays the same)?
 
This is hilarious...

especially one of the comments saying that it's most likely that in a few months they will sound like fax machines.
 
Shouldn't be hard to set up automation using your own annotations; that's just parsing and not logic.

Yeah, but an agent is specifically able to do things like operate a web browser or some other software; it translates English typed commands into actions.

If you can get it to generate code based on a text input, how well can it translate a code snippet to a human-readable annotation?

I've found that even the humble Qwen2.5-14B can summarize my ~1000-line code files pretty well, if I separate them into chunks.
Not sure about DeepSeek; I use that as more of a search engine.
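The chunking itself is trivial; splitting on a fixed line budget is usually good enough as a first pass (the 200-line budget here is an arbitrary illustration, not a tuned value):

```python
def chunk_lines(text, max_lines=200):
    """Split a source file into chunks of at most max_lines lines, so each
    chunk fits comfortably in a small model's context window."""
    lines = text.splitlines()
    return ["\n".join(lines[i:i + max_lines])
            for i in range(0, len(lines), max_lines)]

# A ~1000-line file becomes 5 chunks to summarize one at a time
source = "\n".join("line %d" % n for n in range(1000))
chunks = chunk_lines(source, 200)
print(len(chunks))  # 5
```

Splitting on function boundaries instead of raw line counts would give the model more coherent chunks, but costs you a parser.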

And should this process be 100% reversible, where it generates the same code output based on an annotation as it generated the annotation based on a code snippet? If it's not, but the answers are still correct, does this make the answer worth less (because there is variance, even if the intrinsic meaning stays the same)?

I think if you ran an LLM at a neutral 'temperature', which is basically a creativity slider, you might be able to get reproducible results, but by default they are usually set to be a bit creative.
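With ollama specifically, sampling can be overridden per request via the `options` field of the request body; temperature 0 plus a fixed seed gets you as close to reproducible as the backend allows (still not a hard guarantee across versions or hardware):

```python
def deterministic_options(temperature=0, seed=42):
    """Per-request sampling overrides for ollama's /api/generate endpoint.
    temperature=0 disables the 'creativity slider'; a fixed seed pins the
    remaining randomness."""
    return {"temperature": temperature, "seed": seed}

payload = {
    "model": "qwen2.5-coder:14b",
    "prompt": "Summarize this function: ...",
    "stream": False,
    "options": deterministic_options(),
}
print(payload["options"])  # {'temperature': 0, 'seed': 42}
```

For the annotation round-trip question above, pinning these knobs is the closest you can get to making code-to-annotation-to-code repeatable.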
 
A 3060 would work (my 4070 has 12 GB), it just won't be as quick.

The best one I've got lying around, a "PNY NVIDIA GeForce GTX 560 1 GB Enthusiast Edition", should be super-fast then, right? :lol: :roll:
An AI "running on that" probably still thinks faster than I do.... (if I used it in my ancient HP server it would probably require a separate external PSU just for it)
 
Way too little RAM on that GPU! (and probably no usable CUDA acceleration!)
 