Knowledgebase development fundraiser for 2026

neptronix · Administrator, Staff member · Joined Jun 15, 2010 · Messages: 22,162 · Location: Utah, USA
Howdy y'all.

Our knowledgebase software is complete, but once we hooked it up to ES we realized a few things are lacking and require more development time to get the software from 'good enough' to 'good'.

Here is the list of todos:
- the cheap WYSIWYG editor struggles with advanced formatting and images in some situations. A much better open-source editor has emerged recently; it passes our user-friendliness requirement and seems actively developed, so it should fix this well.

- we need a comment notification system and a display of the number of comments on an article, for both readers and editors

- we need to make summaries much better; right now they do a very bad job of tempting the user to read an article. This one is hard because we want it to be automatic for the editor, and it may involve either using AI or doing intensive processing of HTML/JSON content
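For the non-AI route, the 'intensive processing of HTML' fallback could start very simply: strip the markup and take the first couple of sentences as a teaser. A minimal stdlib sketch (the function names and sample article are hypothetical, not the KB's actual code):

```python
import re
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect visible text from HTML, skipping script/style blocks."""
    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip = 0

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip:
            self.parts.append(data)

def summarize(html: str, max_sentences: int = 2) -> str:
    """Naive extractive summary: the first few sentences of the visible text."""
    parser = TextExtractor()
    parser.feed(html)
    text = re.sub(r"\s+", " ", " ".join(parser.parts)).strip()
    sentences = re.split(r"(?<=[.!?])\s+", text)
    return " ".join(sentences[:max_sentences])

html = "<h1>Battery basics</h1><p>Lithium cells age faster at full charge. Store them at 50%. More below.</p>"
print(summarize(html))
```

This is the 'chop it off' approach with slightly smarter boundaries; an AI summarizer would replace only the last function.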

So far there is some interest in editing, but our editors keep hitting new bumps with our WYSIWYG editor, so contribution is still limited, and i think it's imperative to improve that first.

See: ES seeks Knowledgebase editors!

I've put in a few grand to bankroll continued development since we last raised about $3k.
Why? it's a donation to the forum, and also, i want the software to eventually become a badass piece of open source.

But i'm wondering if anyone would be willing to donate again?

If so it'd be awesome to have this software in a very good state for the busy season ( spring - fall ).
Let me know if i can get a few hands.

Edit: donations are open here: https://ko-fi.com/es_kb_fundraiser
 
10% of the way there already. Thanks to those who donated.

I decided to pay for part of the new WYSIWYG editor integration. That should leave a little extra $ from this funding round to smooth some additional corners.

This WYSIWYG editor will also be added to the comment system so that readers can leave much better comments. Right now the comment entry is pretty spartan.

Keep it coming!
 
This might be a stupid question as I am not a software or web developer, but will the knowledgebase be crawlable/searchable by search engines like Google and Bing? It seems like that could be a benefit and bring more new people to the forum.
 
Absolutely. Right now, zero consideration has been put into this and we have no idea how well it works.
Once we have a decent amount of content, this becomes high priority.
But we have to nail the editing and collaborating process before we can expect a growth in the rate of articles added.
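When crawlability does become the priority, the standard first steps are server-rendered HTML plus a sitemap submitted to the engines. A stdlib sketch of generating a minimal sitemap per the sitemaps.org protocol (the article URLs are made up):

```python
from xml.etree import ElementTree as ET

def build_sitemap(urls):
    """Build a minimal sitemap.xml per the sitemaps.org protocol."""
    ns = "http://www.sitemaps.org/schemas/sitemap/0.9"
    root = ET.Element("urlset", xmlns=ns)
    for loc in urls:
        url = ET.SubElement(root, "url")
        ET.SubElement(url, "loc").text = loc
    return ET.tostring(root, encoding="unicode")

print(build_sitemap([
    "https://example.com/kb/battery-basics",
    "https://example.com/kb/motor-sizing",
]))
```

Pointing a `Sitemap:` line in robots.txt at the generated file is usually enough for Google and Bing to start indexing.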

Speaking of machine readability, it would be interesting to find a way to optimize the article output for AI understanding. We could use that understanding to build an extremely good search engine that directs you to the right articles ( instead of sloppily trying to generate a full answer ). I think by the time we get to it, the cost of running AI will be reasonable. ( My Nvidia 5090 just keeps getting better at coding assistance, for example, as the software improves. That card could handle this task today. )
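A 'point you at the right article' search doesn't even need an LLM to start; classic TF-IDF ranking gets surprisingly far. A toy stdlib sketch (the sample articles are invented for illustration):

```python
import math
from collections import Counter

def tfidf_rank(query, docs):
    """Rank documents against a query using TF-IDF cosine similarity."""
    tokenize = lambda s: s.lower().split()
    doc_tokens = [tokenize(d) for d in docs]
    n = len(docs)
    # document frequency per term, then smoothed inverse document frequency
    df = Counter(t for toks in doc_tokens for t in set(toks))
    idf = {t: math.log(n / df[t]) + 1 for t in df}

    def vec(tokens):
        tf = Counter(tokens)
        return {t: tf[t] * idf.get(t, 0.0) for t in tf}

    def cosine(a, b):
        dot = sum(a[t] * b.get(t, 0.0) for t in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    q = vec(tokenize(query))
    scores = [(cosine(q, vec(toks)), i) for i, toks in enumerate(doc_tokens)]
    return [i for _, i in sorted(scores, reverse=True)]

docs = [
    "how to size a battery pack for an ebike",
    "choosing a hub motor versus mid drive",
    "charging lithium batteries safely",
]
print(tfidf_rank("battery charging", docs))  # → [2, 0, 1]
```

An AI version would swap the TF-IDF vectors for embeddings but keep the same retrieve-and-rank shape.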

But first things first :cool:
 
Hello folks.

For some time i have thought about adding AI-generated article summaries for our knowledgebase articles, since the default 'take the article and chop it off' method turns out to really suck. But until now, the output was bad and/or it cost too much money.

I have a test system with a Nvidia 5090 and with today's AI technology i can finally get enough power and intelligence to do things like:
- generate ( almost ) senior developer level code to help speed up our programming. I have already used this to write AI scraper defenses successfully for ES
- generate summaries of things
- find obscure bugs in our code we would have never even considered
- help me write documentation with a 50/50 mix of human : robot labor

[Attached image: baby-AGI-2.0.jpg]

Here is a picture of our baby AI machine. With a power supply relocation, it's ready to accept another big GPU.

Recently i discovered a technique to overstuff a larger AI model into this card and the results are outstanding for such meager hardware.
More nerdy details here:
ES seeks Knowledgebase editors!

The only problem is that while we just hit the intelligence level we need, only one person can use it at a time because we don't have the memory.
So it works but can't be used much.

I tacked on $1000 to the amount we want to raise because we need to buy a big $8000 card that can:
- do AI powered spam detection ( since it's what's necessary to counter AI generated spam )
- write KB article summaries
- help us write more code per man-hour than we can currently pay for. Our goal is to produce 2x the code per hour by the end of the year, and we think this is possible for ES.
- who knows what else, but we promise to only use it tastefully instead of wantonly.

I will cover the cost of electricity for ES' use. Can ES chip in for GPU too?

[Attached image: Screenshot_20260217_171343.jpg]
 
Sorry, but i need to be annoying.

The cost of the AI hardware we need has been on a steep upward trend over the last 2 weeks.
We have raised $325 so far, but the cost of this hardware has gone up $500 in those 2 weeks, and i project it will keep climbing.

We need our first $1000 of this year's fundraising goals to help pay for one of these:

[Attached image: 1771892432507.png]

If you haven't donated yet.. could you help us get this first $1000 together so we can order one before it ( probably ) becomes out of reach?
 
Are you wanting to use computer hardware to help write a Knowledgebase? Couldn't you use giant computers that are connected to the internet?

AI Overview

AI computers (specialized, high-powered servers and clusters) are primarily located in massive, centralized data centers operated by major technology companies and specialized cloud providers. These facilities are heavily concentrated in specific regions that offer high power availability, robust cooling, and favorable regulatory environments.
 
I saw this recently and thought I'd share:

Probably not better than the GPU you posted, except that it can run bigger models (more RAM, and you could get two for the same price, making 256 GB total) and that it consumes way less power. Other than that I'm sure the GPU would be way faster. It seemed like an interesting device though, especially if you didn't already have the computer and setup to use such a GPU.
 
After playing with localhosted LLMs for a year, i have some opinions about this:

Unfortunately these all-in-one AI boxes have the memory to run large models but none of the speed.
20 tokens/sec becomes 10 tokens/sec when the context window is filled.
If you are also using agentic software, expect a 2-5x speed hit on top of that.
Once we get below 20 tokens/sec, we enter territory where it would be faster to do the work yourself.
You must run a substantially dumber AI model to get good speed, and few people would be happy with that; they could have run that model on a more affordable GPU.
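To put rough numbers on the compounding slowdowns described above (these figures are this thread's estimates, not benchmarks; the 3x agentic penalty is an assumed midpoint of the 2-5x range):

```python
def effective_tps(base_tps, context_full=True, agentic_overhead=3.0):
    """Effective tokens/sec after the penalties described above."""
    tps = base_tps / 2 if context_full else base_tps  # a full context window roughly halves speed
    return tps / agentic_overhead                     # agentic tooling adds a 2-5x hit on top

base = 20.0  # advertised tokens/sec on an all-in-one AI box
tps = effective_tps(base)
print(f"{tps:.1f} tokens/sec effective")                    # ~3.3
print(f"{2000 / tps / 60:.0f} min for a 2000-token reply")  # ~10 min
```

At ~3 tokens/sec, a single long agentic reply takes on the order of ten minutes, which is where doing the work yourself starts winning.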

Example of this performance difference:

[Attached image: G99eiz1WMAERHtq.jpeg]

To add insult to injury, if you attempt to buy multiple units and parallelize them, you'll get a very low contribution per unit. It quickly becomes uneconomical to scale up by building a 'cluster'.

These products demo well but aren't ideal for any serious usage.

If you want some 'starter hardware', the card below is your best value. You can combine it with some LLM model offloading to DDR5 and successfully evaluate a big 200b model. It will run at a crawl, but at least give you a taste.
https://www.amd.com/en/products/gra...o/ai-9000-series/amd-radeon-ai-pro-r9700.html
Unfortunately parallelization on these is not good yet, but that might change in the future.
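A back-of-envelope for the 'offload to DDR5' idea above: put as many layers as fit on the GPU and spill the rest to system RAM. All sizes here are illustrative assumptions (equal-sized layers, a ~110 GB 4-bit quant of a ~200b model), not specs for any particular card:

```python
def layers_on_gpu(vram_gb, n_layers, model_gb, reserve_gb=2.0):
    """Estimate how many transformer layers fit in VRAM; the rest offload to system RAM."""
    per_layer_gb = model_gb / n_layers        # assume layers are roughly equal in size
    usable = max(vram_gb - reserve_gb, 0.0)   # leave headroom for KV cache etc.
    return min(n_layers, int(usable / per_layer_gb))

# e.g. a ~200b model quantized to ~4 bits is on the order of 110 GB across ~80 layers
print(layers_on_gpu(vram_gb=32, n_layers=80, model_gb=110))  # → 21
```

With only ~a quarter of the layers on the GPU, most tokens wait on DDR5 bandwidth, which is why it 'runs at a crawl'.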

Better to make a $1300 mistake than a $4000 mistake 😅
Or even cheaper, just rent ( if you don't require privacy ).
I expect the next generation of AI hardware to be closer to affordable for the masses, but we won't get it until 2027.

If you want to learn more about running LLMs on your own hardware, this is the best community i've found for that:
https://www.reddit.com/r/LocalLLaMA/
 