New rule against AI use on ES

It's a way to specify what kind of AI, yeah.
LLM usually = one that outputs text.

Others create speech, video, actions, etc.

Sorry, I run two tech companies and it's hard to turn industry slang mode off sometimes.
 
np, makes me look stuff up. Always something new to learn.
Later, floyd
 
- you vet, edit, and humanize the response so that it doesn't have unnatural language and represents what you want to say

When I read the thread you linked, I wasn't consciously aware of this rule, and I was thinking along the lines of: this is a framework, it's formatted using one of the LLMs, but it's been vetted OR it will get vetted from this point on. I honestly did read it and, glaringly, I didn't see the dangerous mistakes. I was more looking at the formatting, I guess, and fantasizing about a process where this mythical knowledgebase (it will stay mythical until it's functional, which means both operational and 'filled') would get a jump start by letting LLMs put up frameworks which then get corrected/vetted/expanded on by people with enough first-hand knowledge. You could mark unvetted concepts as such, which would get the attention of those who could vet things in that particular field of expertise.

Now, that's totally not thinking of the reason why those bots actually are here, which is to be malicious, not helpful ;)

I have seen more of these users lately. So much non-information, with lots of fluff around it. And pretty often the subject of discussion has a link to some item... that's what I thought most of these were: link spammers.

So when I see something which doesn't look like link spam, it's a new one to me. The intent could still be malicious, because potentially dangerous misinformation is presented in a manner which makes it look like trustworthy information.
 
the intent could still be malicious
Using AI to lie. In many cases, the desire will be to gain social credit by appearing to be an authoritative voice - we value storytellers now; it is a valid social niche. The majority of people will not know if the 'information' presented is worth crediting or not, and the bar is very low.

Frank Herbert wrote Dune in the 60s, and in it he posited a past "Jihad" (his word choice) against the use of intelligent machines, such that computation was done by humans, selected and trained for the role.

I have wondered what might have led to such an event, but the rise of AI use, and its potential to lie and flood the world with crap in the hands of malicious and just plain stupid users, is leading me to think that such an event could occur.

There is a basis for it. AI permits stupid and malicious people to appear as knowledgeable and gain followings. It may be that the only way to prevent the damage done by such people is to prohibit AI. Stupid and malicious aren't really controllable.
 
I mean, spam bots have been around for ages. But using AI to first try to gain 'stature' or 'exposure'... I see it constantly on social media, used to influence public opinion, but on a forum like this it's just a quaint concept to me.

Considering how effective 'they' are in influencing public opinion through social media manipulation, it shouldn't be such a surprise people would try to use it in other manners.
 
Using AI to lie. In many cases, the desire will be to gain social credit by appearing to be an authoritative voice - we value storytellers now; it is a valid social niche. The majority of people will not know if the 'information' presented is worth crediting or not, and the bar is very low.
It’s worse than that. Students are using it so much that teachers are switching to handwritten homework assignments, to at least stop the cutting and pasting. Teachers have tools to detect AI, but they are getting less effective. At the same time, more than half of high schoolers can’t even read at an elementary school level. Yet they are experts in reading and writing text messages. It might be interesting to find a question to which AI provides a nonsensical response, then use that in a homework assignment and see what gets turned in.
 
Students are using it so much that teachers are switching to handwritten homework assignments, to at least stop the cutting and pasting.
Don't worry, there's ways around that too.

 
I mean, I would have used a plotter instead of a 3D printer, but I agree with the notion 😇

It's a good thing I'm not a student anymore. Then again, when I was I wasn't always as mischievous as I feel I am now ;)
 
Nationally? Or locally in some urban district?
Locally, talking to teachers, but the AI use in schools is everywhere.
Don't worry, there's ways around that too.
No doubt. Handwritten is harder for students, but if they are just copying AI it's also harder for teachers to detect, because there are more steps to convert it to digital before the detection tools can be used. So it can’t really be stopped.

Good thing we’re paying for stuff electronically now. Could you imagine buying with cash and waiting for the cashier to figure out your change by counting on their fingers?
 
I honestly did read it and, glaringly, I didn't see the dangerous mistakes. I was more looking at the formatting, I guess, and fantasizing about a process where this mythical knowledgebase (it will stay mythical until it's functional, which means both operational and 'filled') would get a jump start by letting LLMs put up frameworks which then get corrected/vetted/expanded on by people with enough first-hand knowledge. You could mark unvetted concepts as such, which would get the attention of those who could vet things in that particular field of expertise.

I also got excited reading that post: hey, maybe this person will donate good knowledge to the KB... That illusion faded halfway through the post.

Problems with the LLM-produced post:
- The article proposes to help prevent battery fires, but only touches on one aspect out of maybe seven.
- There is no such thing as a catch-all expectation for a safe operating range in terms of mOhm, because this varies A LOT per cell.
- No mention of IR increasing as a function of temperature or cycles; only two of the factors that increase it are mentioned.
- It says 'always check the data sheet' but goes on to mention fixed values for IR anyway.
- Weight vs. capacity is an irrelevant detail.
- Weight and energy density don't have a correlation like that; these values are completely fabricated.
- 'Higher C rates generate more heat' - relative to what? What if I have a 20C cell?
- An increased C rate does not increase the internal resistance of the cell.

The article is clearly written without a holistic understanding and includes many details not relevant to the topic.
That's the hallmark of unedited LLM output, and it amounts to a time-waster.
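The 'higher C rates generate more heat' complaint can be made concrete: resistive heating in a cell is roughly P = I²R, so the heat depends on the actual current drawn and the cell's internal resistance, not on the C rating printed on the label. A quick sketch (illustrative, made-up numbers, not from any datasheet):

```python
# Sketch: joule heating in a cell depends on actual current and internal
# resistance, not on the C rating on the label. All values are illustrative.

def joule_heat_w(capacity_ah: float, c_rate: float, ir_mohm: float) -> float:
    """Resistive heating P = I^2 * R for a cell discharged at c_rate."""
    current_a = capacity_ah * c_rate          # current in amps
    return current_a ** 2 * (ir_mohm / 1000.0)  # IR converted mOhm -> Ohm

# A low-IR cell vs. a high-IR cell, both pulled at the same 2C (made-up IRs):
print(f"low-IR cell:  {joule_heat_w(2.5, 2, 15):.3f} W")   # 5 A through 15 mOhm
print(f"high-IR cell: {joule_heat_w(2.5, 2, 60):.3f} W")   # 5 A through 60 mOhm
```

At the same current draw, the higher-IR cell dissipates four times the heat, which is why a blanket 'higher C rate = more heat' claim means nothing without knowing the cell's IR: a 20C-rated, low-IR cell loafing along at 2C barely warms up.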

I have seen more of these users lately. So much non-information, with lots of fluff around it. And pretty often the subject of discussion has a link to some item... that's what I thought most of these were: link spammers.

We would love it if you reported things like that when the case is obvious.
We have a high-standing SEO position, and people have found out that they can effectively abuse it.
We want those accounts nuked as fast as possible.

We are probably going to have to build some specialized tools to find them automatically because it's becoming more common.
 
Nep, ya know that this is the internet? There are 8,205,697,123 people and who knows how many AI computer robots looking at this. Don't think they all know what an LLM is. According to AI, LLM is:

AI Overview

LLM can refer to a Master of Laws degree or a large language model.


Master of Laws (LLM)
  • A graduate degree in law that allows lawyers to specialize in a specific area of law
  • An advanced professional degree for those already admitted to legal practice

  • A way for international lawyers to gain exposure to U.S. or Canadian law systems

  • A way for lawyers to gain international qualifications if they earned a law degree outside the U.S. or Canada
Please stop confusing the confused computers and stupid humans like myself. [Just joking I am so smart I don't need a smart phone. Got a flip phone.]
 
Good thing we’re paying for stuff electronically now. Could you imagine buying with cash and waiting for the cashier to figure out your change by counting on their fingers?
Total came to $29.26. My goal in life is to have more dollars and less change. Always looking for treasure, I look at the floor hoping to find a penny to simplify the transaction. She said don't worry about the penny. I give the cashier a quarter, a nickel, a twenty, and a ten dollar bill. She refuses to take my nickel. She proceeds to type $30.25 into her register. SHE GIVES ME 99 cents! I try to give her the 99 cents plus my nickel. At this point I would be happy to trade $1.04 in coins for a paper dollar. She gets a calculator and has no idea how to figure this out. Thanks for the food. I go home with more change in my pocket.

This is why I like the do-it-yourself checkout. No humans. Usually I use a plastic card. On the subject of plastic cards: I want 2.5% of everybody's money for everything. The merchant pays about a 2.5% fee to collect money from plastic cards.
 
Total came to $29.26. My goal in life is to have more dollars and less change. Always looking for treasure, I look at the floor hoping to find a penny to simplify the transaction. She said don't worry about the penny. I give the cashier a quarter, a nickel, a twenty, and a ten dollar bill. She refuses to take my nickel. She proceeds to type $30.25 into her register. SHE GIVES ME 99 cents! I try to give her the 99 cents plus my nickel. At this point I would be happy to trade $1.04 in coins for a paper dollar. She gets a calculator and has no idea how to figure this out. Thanks for the food. I go home with more change in my pocket.

This is why I like the do-it-yourself checkout. No humans. Usually I use a plastic card. On the subject of plastic cards: I want 2.5% of everybody's money for everything. The merchant pays about a 2.5% fee to collect money from plastic cards.
If you can do the most basic algebra, say solve for "x":

2x + 3 = 11

...and if you can answer "x = 4" without the aid of a calculator or any pre-made program, then you're in the upper tertile of Americans mathematically.

Let the implications of this sink in. Weep for your future. The Flynn Effect is a lie.
 
Problems with the LLM-produced post:
- The article proposes to help prevent battery fires, but only touches on one aspect out of maybe seven.
- There is no such thing as a catch-all expectation for a safe operating range in terms of mOhm, because this varies A LOT per cell.
- No mention of IR increasing as a function of temperature or cycles; only two of the factors that increase it are mentioned.
- It says 'always check the data sheet' but goes on to mention fixed values for IR anyway.
- Weight vs. capacity is an irrelevant detail.
- Weight and energy density don't have a correlation like that; these values are completely fabricated.
- 'Higher C rates generate more heat' - relative to what? What if I have a 20C cell?
- An increased C rate does not increase the internal resistance of the cell.

How about the use of 'Lithium-ion' and 'Lithium polymer' as chemistries? Lithium polymer isn't used in 18650 cells, AFAIK.
1) All lithium-based batteries are lithium-ion batteries.
2) 'Lithium polymer' refers to a lithium battery with a polymer electrolyte. This is not a chemistry, unless a new element has been named Po. Oh, wait, there is one already: Po stands for polonium, a rare and highly radioactive metal with no stable isotopes. If a cell were produced with the chemical makeup 'LiPo', it would be damaging to the user's health.
Later floyd
 
You're scaring the children...
Their future in retail sales already took a big hit when the Dollar Store started selling things for more than a dollar. Now it requires upper-level math, due to all the confusion different prices bring. Fortunately, with a college degree, they may be able to work the cash register in a normal store, especially since they are getting rid of pennies.
In the end it will all work out, since as customers, they won’t know if they got the right change anyway.
 
I've worked in retail since 1995, and despite how I can't deal with math... I am far better at it (and most other things) than the vast majority of teenyboppers and the like that have worked*** at these places.

***Most of them don't actually want to work, so they typically just stop showing up... some of them call in sick for a while first, but many don't even bother. Some keep showing up but don't do anything except stand there (these days they're texting or browsing on their phones to help them ignore the world around them, but this type of kid was pretty good at it even before that was a thing). Work ethic... it used to be a thing; it doesn't really seem to be much of one these days.
 
The problem will be short-lived at the current pace. Not long from now, we’ll be complaining about how the AI robots serving you fast food at the drive-thru are demanding you add a 20% tip to your total, because they need to be able to make an artificial living wage.
 
Seems like it would take more computing power to detect AI than it does to generate AI responses/posts. For school work, tools can simply scour the web to detect plagiarism, but not necessarily judge the quality of the information.
I suspect AI more when a post comes from a new account, or an old account that has never posted, so maybe tools that scrutinize those types of posts, rather than ones posted by established members who post regularly, would direct the computing power more towards the right population.
 
You'd use a ton of computing power and the accuracy level would be too low to be useful.
That's what most big organizations seem to have concluded so far.

Right now you can't use a computer to generate quality information without some significant portion of human intervention... and with this same technology, you can't judge the quality of information without human intervention either. (Google is still figuring this out, because AI-powered crap is flooding their search results.)

The best bet is still a human eye and brain making the interpretation.

I agree that problems usually occur within a new user's first posts. Some interface for quickly viewing these would be helpful, but we don't have a big enough problem to justify it yet.
 
We have our first major LLM spam:
Avoid " F I R E" - A little knowledge can save your life

This post contains some incorrect information, some of which is dangerously incorrect. This is unedited LLM output. The poster clearly didn't vet it for accuracy, or maybe they didn't understand the topic well enough to do that.

These kinds of posts are a cancer. ES is at or near the top of Google searches for small EVs. If those search results are polluted by good-looking but dangerously incorrect information, it has an outsized effect on misinforming the internet... and that erodes this website's usefulness.

What should I do?
Don't post LLM output as if you wrote it in the first person. Always credit the LLM that wrote it.
Always vet it for accuracy; we don't need artificial stupidity on this website on top of natural stupidity 😅

Members: please report this kind of post whenever you see it.
Moderators: move these to the dumpster and wag your finger at the offender
I don’t get what the point is of an AI post like that one. It’s not for advertising or to be intentionally malicious, and you can also easily look up the information in the post.
Why “create” it in the first place?
 
It's a way to specify what kind of AI, yeah.
LLM usually = one that outputs text.

Others create speech, video, actions, etc.

Sorry, I run two tech companies and it's hard to turn industry slang mode off sometimes.
I'm glad you switched to LLM. Having done a fair amount of artificial intelligence work in the 90s, the use of the acronym "AI" these days gets my blood pressure up. IMO, AI these days is a PR conduit for companies like NVIDIA to pump up their stock price. Everyone gloms onto the word based on what they see and hear and... bingo, it's a thing to worry about. Kind of like "swarming UAVs" and "intelligent agents". My own personal criterion for artificial intelligence is that it has to have the capability to modify its behavior based on external stimuli from unforeseen events. It has the ability to gain knowledge and operate within constructs that were never originally envisioned at inception... similar to what was presented in Terminator. In this behavior, it is also subject to viruses, much in the same manner as humans, where its behavior can be corrupted through a number of false or surreptitious learning events. This does not currently exist in its entirety in what is known as AI today.
 
I don’t get what the point is of an AI post like that one. It’s not for advertising or to be intentionally malicious, and you can also easily look up the information in the post.
Why “create” it in the first place?

Haha... that's what I think when someone raw-posts LLM output.
They're not your words, I don't know if you vetted them for accuracy, and I don't know what the prompt was so I can't reproduce it... it could be a big time-waster, and usually is. So why should I read it?

And there isn't even a good motive, like you said. Except maybe to make the person seem smarter than they actually are (but that mirage won't last long).

My own personal criterion for artificial intelligence is that it has to have the capability to modify its behavior based on external stimuli from unforeseen events. It has the ability to gain knowledge and operate within constructs that were never originally envisioned at inception... similar to what was presented in Terminator. In this behavior, it is also subject to viruses, much in the same manner as humans, where its behavior can be corrupted through a number of false or surreptitious learning events. This does not currently exist in its entirety in what is known as AI today.

Same here. I love the comment about viruses being, in effect, equivalent to ideologies; so true.
I think we have a bit of time before we have to build the weapons to fight the robot apocalypse.
 