Apparently ChatGPT can help you write code / programs?

I do see this as the democratisation/unleashing of lawlessness in software, as you can now create fairly complex programs with absolutely zero programming knowledge.
After implementing some, and later getting a corrupt file, I got to wondering how careless some of this AI coding might be. Even though I'm still using some of the useful generated code, I'm on hold otherwise until I've given more thought to backing up early and often before implementing mystery-generated magic code, when my data is my bread and butter (or I'm broke).
 
it can't do math, and lies :whistle:
 
After implementing some, and later getting a corrupt file, I got to wondering how careless some of this AI coding might be. Even though I'm still using some of the useful generated code, I'm on hold otherwise until I've given more thought to backing up early and often before implementing mystery-generated magic code, when my data is my bread and butter (or I'm broke).

I think I said this higher up in the thread, but if you are not at least mentally executing or thinking through the code and how it will work, you are going to end up with huge problems later down the line.

The problem is, taking the time to vet the code severely reduces the productivity you can gain from it.

Here's what I got when I asked it to convert a color palette from RGB hex codes to HSL.
Left: using a program called 'gpick' on Linux to convert the colors (and I added another color).
Right: ChatGPT output.

The results are bizarre. The toning is similar, but not identical, and the hue and saturation are completely wrong.
It understood it needed to output CSS, but forgot the semicolon at the end.
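For reference, this conversion is simple enough to do deterministically, so there's no need to trust a language model with it. A rough sketch in PHP (my own, not ChatGPT's output; the sample color and the rounding are just my choices):

    <?php
    // Convert a hex color like "#3a7bd5" to a CSS hsl() string.
    function hexToHsl(string $hex): string {
        $hex = ltrim($hex, '#');
        [$r, $g, $b] = array_map(fn($c) => hexdec($c) / 255, str_split($hex, 2));

        $max = max($r, $g, $b);
        $min = min($r, $g, $b);
        $l   = ($max + $min) / 2;               // lightness

        if ($max == $min) {
            $h = $s = 0;                        // achromatic: grey, no hue or saturation
        } else {
            $d = $max - $min;
            $s = $l > 0.5 ? $d / (2 - $max - $min) : $d / ($max + $min);
            if ($max == $r)     $h = fmod(($g - $b) / $d, 6);
            elseif ($max == $g) $h = ($b - $r) / $d + 2;
            else                $h = ($r - $g) / $d + 4;
            $h *= 60;
            if ($h < 0) $h += 360;
        }

        return sprintf('hsl(%d, %d%%, %d%%);', round($h), round($s * 100), round($l * 100));
    }

    echo hexToHsl('#3a7bd5'); // roughly hsl(215, 65%, 53%);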

[Attached screenshot: ChatGPT Fuckup.png]

In this case, I could visually see things were wrong immediately because I'm using a JetBrains IDE that shows CSS colors.

...not so easy when it gives you garbage code!

I suggest that you leave a comment in your code for anything generated by these kinds of technologies that states it was generated by (insert LLM here). That way, you know which code is suspect later down the line :mrgreen:
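Something as simple as a header comment works; a hypothetical example of what I mean (the wording is just a suggestion):

    /*
     * GENERATED by <insert LLM here>, only lightly reviewed.
     * Treat as suspect until someone has actually read and tested it.
     */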
 
Yes...

And back up early, back up often, and don't forget to back up; that's always something to keep in mind for any data that is important to you.

I also highly recommend using a versioning system, consistently, for all data files of any kind.

And always use "Save As", never "Save", so you never ever write over a saved file. In any program that allows user-modified keybindings, for instance, I always change Ctrl-S to Save As instead of Save. (There are also key macro programs you can run in the OS to capture and alter key sequences to do that for you if you can't do it in a program.) Could save your life someday when you accidentally overwrite the working version of something with a trashed / buggy / whatever version. ;)

All of these things have saved thousands of hours of my work in various creative efforts, mostly in my music creation (all the way back to floppy-disk-only machines).
 
I've had a little play with it using some SQL code I wrote to intentionally fail, to see if it could suggest a fix. It was nothing special, just a data type mismatch, but it did suggest the correct answer. Another use I've found was having it explain a PowerShell script to me, as I wasn't sure what it was doing.
 
I've got to be honest - it's stuff like this that makes me glad I am at the end of my career rather than the start of it. I am a Wintel system admin looking after about 100 servers with a team of two. At the moment things are quite safe, but over the years I have seen the workload decrease as hardware and software get better. I only have 5 years left to retirement (maybe a couple more than that if I want to work for play money) and I can see the way it is heading. I think it will be a case of OSs becoming, to a large degree, self-healing, self-patching, self-whatever-I-do-to-them. There will always need to be someone, but they will not have as much to do; more likely there will be fewer staff looking after more servers.

I have kinda been through this once before. My Y2K project was working on mid-range (sort of) IBM mainframes. During the run-up, AS/400s (small systems) were getting more powerful and moving up in what they could do. IBM then reduced the minimum spec needed for MVS (their big-boy OS) and offered really good deals … "as you have to recompile/re-write your code anyway". The result was that after Y2K the middle ground (VM/VSE) pretty much disappeared. I found myself in a situation where 15-16 years' worth of experience was pretty much worthless. I don't fancy going through it again.

I figure I'll see my 5 years out. That will get me my pension and leave me debt free. I can work a couple more years and my pay will be totally disposable, so a couple of years of that will build a nice play fund. Then call it a day. Spend time playing VR games, watching movies, reading, and messing about on my e-trike. Got to wonder where battery/motor technology will be by that time.
 
ChatGPT can now search the internet too. Before, its knowledge only went up to 2021.
 
Lately I have been preferring phind.com over ChatGPT because there is now a significant disparity in the quality of answers given; ChatGPT seems to regularly provide worse answers now.

I have 15 years of programming experience, so I still don't find AI very useful, because the answers are at or below my level of understanding, but rarely above it.

Some academic research I've been reading shows that users with low skill in the topic they are asking about can certainly benefit from AI, but very skilled people see little to no benefit from it. That's mostly been the case for me.

We do have a member of the ES dev team who has a really slow typing speed, and he uses it to keep up with the rest of the team. The benefit to him is most certainly massive.

I think what AI is, is an equalizer, and there is no replacement for highly skilled programmers, and probably not for technicians either.
So if you want to stay relevant in the future, start leveling up those skills.


I find phind to be useful mostly in arenas that I don't know (example: frontend JavaScript, which I do little of), or for having it do a first pass of research for me, asking a question like:

How do I, in Apache, use a SetEnvIf expression to exclude an access with a query variable?
^-- This is a mega-specific question that would take pages of reading to come to an answer on Google. Phind gave me a correct answer on the second try, which was great. That saved me about 10-30 minutes of reading obscure documentation pages and Stack Overflow posts.
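For anyone curious, and assuming the goal was to keep such requests out of the access log, one way to do it on Apache 2.4 is mod_setenvif's SetEnvIfExpr plus an env= condition on CustomLog. A rough sketch, not phind's exact answer; the query parameter name and log path are placeholders:

    # Flag requests whose query string carries a (placeholder) debug=1 parameter
    SetEnvIfExpr "%{QUERY_STRING} =~ /(^|&)debug=1/" dontlog

    # Keep flagged requests out of the access log
    CustomLog logs/access_log combined env=!dontlog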
 
One Achilles heel of AI is that it is computationally *wildly* expensive at the moment, and therefore a mass expansion of its usage will be at odds with carbon emissions goals in western nations. The West could also potentially lose Taiwan as a chipmaker, which would also throw a monkey wrench into its advancement.

I say if it can enhance your job, then use it and follow the stream. Survival of the fittest is all about the ability to adapt to new conditions, and change is the only constant in tech.
 
Update on this.

All LLMs still suck for complex coding use cases, and I can't get any of them to do things like write a functioning asynchronous MySQL function for PHP. In that case, what worked was googling and finding an 11-year-old PHP gist, which should have been in the AI's training data. I'm still not using them for code assistance and find them disappointing.
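For context, the kind of thing I was after is mysqli's async mode. A minimal sketch of that pattern (my own illustration, not the gist's code; the connection details and query are placeholders):

    <?php
    // Fire a query without blocking, do other work, then poll for the result.
    $db = new mysqli('127.0.0.1', 'user', 'pass', 'example_db');

    $db->query('SELECT SLEEP(2), 42 AS answer', MYSQLI_ASYNC);  // returns immediately

    // ... do other work here while MySQL is busy ...

    do {
        $read = $error = $reject = [$db];                 // mysqli_poll rewrites these, so reset each pass
        $ready = mysqli_poll($read, $error, $reject, 1);  // wait up to 1 second per pass
    } while ($ready === 0);

    if ($result = $db->reap_async_query()) {
        print_r($result->fetch_assoc());
        $result->free();
    }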

But I've been experimenting with locally hosted open source AI, and it's starting to get good for some uses.
These days I can pipe the latest models into my business machine's IDE with a plugin and run the LLM on my gaming rig (Nvidia 4070).

Qwen2.5 and Llama 3.2 are both pretty impressive and seem better than ChatGPT-4. Both run as snappy as commercial online services.

Either can create a fairly accurate first draft of documentation on complex code that's written in an uncommon dialect of PHP.
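The IDE plugin handles the plumbing, but for a sense of what's involved (assuming ollama is serving the model, which is what I use), asking a local model for a documentation draft can be as simple as this rough sketch; the model name and file path are placeholders:

    <?php
    // Ask a locally hosted model (via ollama's /api/generate endpoint) to draft docs.
    $code   = file_get_contents('src/LegacyImporter.php');   // placeholder file
    $prompt = "Write a PHPDoc-style summary of what this code does:\n\n" . $code;

    $ch = curl_init('http://localhost:11434/api/generate');
    curl_setopt_array($ch, [
        CURLOPT_POST           => true,
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_HTTPHEADER     => ['Content-Type: application/json'],
        CURLOPT_POSTFIELDS     => json_encode([
            'model'  => 'qwen2.5:14b',   // any model you've pulled with ollama
            'prompt' => $prompt,
            'stream' => false,           // one JSON blob instead of a token stream
        ]),
    ]);

    $reply = json_decode(curl_exec($ch), true);
    curl_close($ch);

    echo $reply['response'];             // the model's first-draft documentation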


Open source LLMs might be good enough soon to power an effective search function for Endless Sphere in a way that doesn't sacrifice the site's or any of its users' privacy. This would be added to the list of search functions instead of replacing the existing XenForo & Google search options.

Hurdles that'd need clearing:
- training must be kept up to date, and we think that's probably too expensive
- the output of the search results needs to cite the original poster; we must credit people
- the electricity cost of an Nvidia 5090 or some other large GPU might be very high; it would need a results caching mechanism like commercial LLM services have to be even remotely cost effective (a sketch of what I mean follows this list)
- curating data to provide high quality answers will require bespoke solutions and possibly too much human labor
- the bang/buck on providing this service is still probably too low
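For the caching piece, here's a minimal sketch of what I have in mind (purely hypothetical; the cache directory and the normalization are my own placeholders): hash the normalized question and reuse a stored answer before ever touching the GPU.

    <?php
    // Reuse a stored answer for repeat questions so the GPU only runs on cache misses.
    function cachedAnswer(string $question, callable $askModel): string {
        $key  = sha1(strtolower(trim($question)));    // normalize, then hash
        $file = __DIR__ . "/llm-cache/{$key}.txt";    // placeholder cache directory

        if (is_file($file)) {
            return file_get_contents($file);          // cache hit: zero GPU time
        }

        $answer = $askModel($question);               // cache miss: run the model
        @mkdir(dirname($file), 0775, true);
        file_put_contents($file, $answer);
        return $answer;
    }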

If you know things about creating customized models for small-scale situations like this, I'd like to pick your brain!
 
Qwen2.5 and Llama 3.2 are both pretty impressive and seem better than ChatGPT-4. Both run as snappy as commercial online services.

What are the specs on your gaming rig that is "as snappy as commercial online services" (besides the Nvidia 4070)? Are you tracking power usage at all during requests?

I saw this earlier and found it interesting for those wanting simple, low-power LLMs to play around with at home:
 
I'm not tracking power costs, but spoiler alert: they're bad 😅

Snappy as in, you can get a good-quality multi-paragraph output in a few seconds. Especially with Meta's Llama 3.x... that thing is the fastest of them all. Qwen2.5-14B produces a bit nicer responses, but is ~25% slower.

Using ollama with any open source AI does put a large load on the CPU before it slams the GPU; I think a Raspberry Pi might struggle with this. Both my work box and my home gaming rig have Intel 10700s (8 cores, 4.9 GHz peak).

I would guess the GPU makes up >75% of the power usage.
 
Nvidia's 5090 looks like it's $1999 and has 32 GB of RAM.
Apparently it has 2x the power of a 4090, and something like 3x the power of my 4070.
This would be enough power for someone to run an LLM for 1-3 people. That's pretty awesome.

Open source AI models are getting scary good as of late, close to competitive with commercial options.

Had some fun with the 4070 the other day. You can download the next generation of Stable Diffusion (Flux) and run it via the 'Invoke' software pretty easily. Image generation is kinda slow, but the output is pretty amazing. We're mostly past the multiple-heads-and-6-fingers era of AI generation.

 
Checking in.

I used Deepseek R1 for the first time two days ago and had my mind blown by the quality of answers I was getting.
Until it went down from DDoS or over-use, I used it to:

- Ask very specific questions about a specialty profiler that Wikipedia developed (most LLMs don't even know what this is, including the expensive American ones)
- Assist me in understanding what good locking code looks like in database engines and how to adapt that to PHP (see the sketch after this list)
- Generate example xdebug trace files so I could test profiler visualizers
- Find good research papers about concurrency handling and how the fastest database engines do it in their code
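To give a flavor of the locking topic, here's my own illustrative sketch (not Deepseek's output; the lock name and connection details are placeholders) of driving MySQL's named advisory locks from PHP so only one worker runs a critical section at a time:

    <?php
    $pdo = new PDO('mysql:host=127.0.0.1;dbname=example_db', 'user', 'pass');

    // GET_LOCK returns 1 on success, 0 on timeout (10 s here), NULL on error.
    $got = $pdo->query("SELECT GET_LOCK('rebuild_search_index', 10)")->fetchColumn();

    if ($got == 1) {
        try {
            // ... critical section: only one connection holds the lock ...
        } finally {
            $pdo->query("SELECT RELEASE_LOCK('rebuild_search_index')");
        }
    } else {
        echo "Another worker holds the lock; skipping.\n";
    }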

I double-checked its answers and they were right >90% of the time.
I haven't used this to generate any code yet, but it sped up my research easily 3x... and I love that it cites its sources, so that I can sanity-check the output.

This experience of having AI become helpful at the high skill level I'm at really blew me away.
I now have to hand in my AI skeptic card. AI is capable of assisting with high-skill tasks at this point!
 
Yep, consider me AI-skeptical, but after Deepseek was introduced I asked it a few general software design questions, like specific problems to solve - and the answers were pretty good. I didn't ask it to do coding, just to explain the approach it would take to solve a particular problem, similar to a question you might get at a job interview. And the answers seemed complete, consistent, well thought out, and actually made sense. It could get a software dev job somewhere, for sure.
 
Ok, I'll bite.

Why? I'm just dipping my toe into this world of coding and software. But when I'm having an issue and trying to solve something, I'll google. I'll go to Stack Exchange. I'll quickly browse Reddit, or one of the other dozen forums that specializes in the problem I'm trying to solve. The answers are already there, on the internet. Then, when I've solved it and tested my solutions, I'm now that much smarter and more experienced.

There isn't anything I can think of wanting to know that I would ask a chatbot. There's no way to know if the answers you're getting are genuine, and they blow through so much electricity and resources. I've been following the progression and hype the past couple years, and I just don't get why there's so much excitement and investment, to get a piece of overblown software that's accurate 90% of the time, at best.

Rant over. I'm the skeptic. Someone argue with me please, I just don't get it.
 
I think, weirdly, where they can be useful at coding is at the far ends of the spectrum. If you have no idea what you are doing, literally zero knowledge, it may be much easier to learn using them, because it's like having a teacher that can explain anything you are unsure of. For this basic stuff it's probably way more than 90% correct, because it's all so basic and you can ask it things knowing none of the nomenclature. Now, you do have to be trying to learn, and not expecting it to just do it for you, but it may be more effective than courses.

On the other end, if you are an experienced and skilled programmer, you'll know whether what it generates is correct, and you can have it remind you of things and look things up much faster than you can yourself. You aren't just having it generate a whole program and sending it, but it can make you much faster. This is why so many companies are providing LLMs to their programmers.

Also, the hype is not even so much about how good they are now; it's technology, and the only thing that matters is the rate of change, and it's currently really fast.
 
Yep, consider me AI-skeptical, but after Deepseek was introduced I asked it a few general software design questions, like specific problems to solve - and the answers were pretty good.

It's amazing for research, isn't it? It's like the search engine Google would have made if they hadn't become a fat monopolist and had kept innovating on search.

It could get a software dev job somewhere, for sure.

Ah, but writing code is only part of what a programmer does for a living. It still needs supervision!
It could speed up a programmer in a number of areas though!
 
There isn't anything I can think of wanting to know that I would ask a chatbot. There's no way to know if the answers you're getting are genuine, and they blow through so much electricity and resources. I've been following the progression and hype the past couple years, and I just don't get why there's so much excitement and investment, to get a piece of overblown software that's accurate 90% of the time, at best.

Rant over. I'm the skeptic. Someone argue with me please, I just don't get it.
There are cases like mine with the wolfy project, where I don't "understand" coding (not the right word, but I don't know how to express it), have a terrible time learning it, and am so frustrated by it that I can't focus on it. And since I can't hire them, I can't get anyone, anywhere, interested enough to fully read and understand what I have to tell them about what's needed, do the coding or help me one-on-one in detail for a long time while I learn how, and then keep helping while I figure it out.**** The bot doesn't care, and just does what it's asked, so as long as it's actually doing it correctly and completely (bots are probably not there yet), then it's probably the "only" way to get what I need done, done, in any reasonable amount of time. ;)

I don't know how many other people are out there in my situation, but I'm sure there are some.


There are probably also people that don't have time to do some or all of whatever it is they're having the bot do, and either don't have $$ to pay a human to do it or don't want to spend it. (This, other than the last few words, also applies to me for the song/album artwork I use on my music--if I used my time to make the artwork, I wouldn't have any to make the music it is for... if I was independently wealthy, without having to work for a living, I could do it all myself, but there just isn't time.)


Mostly I expect it is useful for people who don't *want* to do any of the work themselves, like most of the newbies that come here for "help" with whatever problem they have. ;)




****I've had a number of people help with bits of the project, for which I'm very thankful, and it's been helped along in fits and starts by that, but it's not enough to get it finished in one go...
 
Why? I'm just dipping my toe into this world of coding and software. But when I'm having an issue and trying to solve something, I'll google. I'll go to Stack Exchange. I'll quickly browse Reddit, or one of the other dozen forums that specializes in the problem I'm trying to solve. The answers are already there, on the internet. Then, when I've solved it and tested my solutions, I'm now that much smarter and more experienced.

I do that daily, but... there's a limit to how complex a question you can ask Google. Usually a big-brain ask will require multiple search rounds on Google, because Google searches need to return webpages that help answer the question, instead of providing a single bespoke answer.

But with a really smart LLM like Deepseek or maybe OpenAI's o3-mini, you can ask it the broader question, leave out the specifics you'd normally have to feed Google, and more often than not get a good answer.

What's happening inside these newer LLM models is reasoning capability. They can break your big ask into smaller pieces, read between the lines, and consider each aspect while answering the question. The human no longer has to hand-feed it small pieces of context from the overall question. You just give it the big ask!

There isn't anything I can think of wanting to know that I would ask a chatbot. There's no way to know if the answers you're getting are genuine, and they blow through so much electricity and resources. I've been following the progression and hype the past couple years, and I just don't get why there's so much excitement and investment, to get a piece of overblown software that's accurate 90% of the time, at best.

Ahh... actually, with Deepseek R1, it will give you sources if you use the 'search the web' option.
This makes it fast to confirm, and that's something I used to appreciate about phind.
I always confirm things, but... since I have experience in the realm I'm searching, I'm fast at vetting.
I find Deepseek R1's accuracy to be pretty high, even when asking hyper-niche questions.
In the case that it's wrong, I can steer it towards giving me the right answer by giving it more specifics, explaining why it's wrong, etc., and get to the answer in a few tries!

...this is unlike previous LLMs, which would regularly send you in circles.

Rant over. I'm the skeptic. Someone argue with me please, I just don't get it.

Just try R1 and ask it the hardest questions you've ever asked a computer!
They don't need a phone number to sign up; a Gmail account will do.

And since the website is in China... I'd strongly recommend not feeding it any proprietary/sensitive data!
 
Just try R1 and ask it the hardest questions you've ever asked a computer!
They don't need a phone number to sign up; a Gmail account will do.
Then I guess that's where you've lost me, and why I'll never understand: I don't ask computers questions, never have, and I don't see myself ever doing it. Computers aren't things that can answer questions. Computers and software are tools; I see them the same way I would see a screwdriver, knife, or can opener. They're complex and they can do a lot more than a can opener, but at the end of the day they're dumb tools, and I don't ask my knife the best way to cut an onion.

If you can use it as a tool in your work, then that's great and you should do it. I think we just need to be careful not to integrate them into anything and everything.
 
Just try R1 and ask it the hardest questions you've ever asked a computer!
Damn, it really can produce something that looks remotely useful right from the start. Wondering now if I have what it takes to guide it through writing a GUI with LVGL for an e-bike display for me. I _hate_ writing GUIs :rolleyes:, but I'm unsure if I hate writing to AI even more :unsure:
 