XenForo conversion thread

Oh wow, this icon pack is gold. The battery icons are better than the ones I picked out of another free set. They even have a bicyclist. Luckily I have just enough talent to fill in the gaps (example: a hub motor icon). I think the art department just got a huge upgrade.


 
Bad news: Google decided to stop indexing our pages for some reason, after making our forum grow like nuts.


Working on a fix!
 
Your icon timeline reminded me of Moof the dogcow.
[image: DogCow from LaserWriter 8]


 
Lol.. yeah, I miss the corny stuff from the '90s and 2000s. Apple had some great designs too!


Re: being deindexed on Google, I see evidence that my server protection was set too strict. I adjusted it downwards last night, and the change seems to have been effective; I see a trickle of Google crawling happening.

I believe I fixed the problem, but it's too early to say, because Google hasn't tried to reindex us yet. Will check in on this next week.
 
Google indexing seems to have restarted over the weekend. I would guess we're 1-2 weeks out from being back on Google fully. Cheers!

 
Howdy!
Looks like we've got a slow crawl to recovery, so it's going more like I expected initially.


My adjustment was still too strict; there was still some rare Googlebot banning. I analyzed Googlebot's behavior and adjusted around it. I think we will be good after that.
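
For the curious, "adjusted around it" means treating verified Googlebot more leniently. The standard way to verify that a crawler claiming to be Googlebot is the real thing is the reverse-then-forward DNS check Google documents. A minimal sketch in Python (illustrative only, not the actual code my protection uses):

```python
import socket

def is_verified_googlebot(ip: str) -> bool:
    """Reverse-then-forward DNS check that Google documents for
    verifying a crawler IP really belongs to Googlebot."""
    try:
        host = socket.gethostbyaddr(ip)[0]  # reverse DNS: IP -> hostname
        if not host.endswith((".googlebot.com", ".google.com")):
            return False
        # Forward-confirm: the hostname must resolve back to the same IP.
        return ip in socket.gethostbyname_ex(host)[2]
    except (socket.herror, socket.gaierror):
        return False
```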

Will report back on the effort next week!
 
Morning. Checked in on things.
Halfway to recovery.

Had to loosen our DDoS protection further; I noticed an instance of Googlebot being banned since the last check. That ban is responsible for the temporary slowdown of the upward motion in the graph.


The DDoS protection I designed in 2015 was effing great until Google broke its ability to be tuned strictly, by deciding they were special and don't have to follow the robots.txt rules.

I'm not happy about it, but I need to replace the technology next year so that we don't have another incident like this.

Anyway, it's still super busy for me at work; I should be able to take a breather and give ES more TLC next month.
 
The trend line on the above problem continues to go upwards. Will check in in detail next week, but I'm pretty sure we're good. 60% recovery so far.


Loosening the DDoS protection to let Google hammer us has allowed other bots to do the same. XenForo handles this with grace, but ES v1.0 on phpBB for some reason doesn't close TCP/IP sockets or processes fast enough, so the old site being heavily indexed causes the server to stutter intermittently.

Aka very likely a nightmare to debug.
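
For anyone following along at home, the cheap way to eyeball this kind of pile-up is to tally TCP socket states on the server. A quick sketch, assuming a Linux box with the ss utility (generic diagnostics, not our tooling):

```python
import subprocess
from collections import Counter

# Tally TCP socket states; a pile-up of CLOSE-WAIT usually means the
# application is not closing its end of finished connections.
out = subprocess.run(["ss", "-tan"], capture_output=True, text=True).stdout
states = Counter(line.split()[0] for line in out.splitlines()[1:] if line.strip())
print(states.most_common())
```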

The old site and new site need to be split into two virtual machines long term, but that would mean increasing our hosting costs.


I've been working on a thing I call Project Baby Datacenter. It will allow us to run ES on our own hardware and slash the hosting bill to 33% of what AWS costs, while still providing the subset of AWS features we use. That means if I need to do something like split old/new ES into two VMs, I don't have to think about the cost implications or engineer around them; I just do the thing. Our bandwidth would also cost a lot less.

Best of all, we can just dump ES into a virtual machine file and run it on Baby Datacenter.. no server reconfiguration needed, and really small downtime. Badass.

Will work on Baby Datacenter v1.0 over the winter. It will look approximately like this and sit in a colocated space:

[image: 2x Dell OptiPlex micros in a rack mount]
News this week: our knowledgebase system is complete and at the stage where it needs a look-over + CSS adjustment before it can be deployed. I would say it also should be approved by our stakeholders (you guys) & adjusted one last time after that.

More later!
 
I'm dipping my toes into learning about all this, just for my own education. Can you tell me, what are the specs on that above machine?
 
That would be a pair of Dell Micros with Intel 14500T processors (14 cores, performance cores hitting a top speed of 4.8GHz) and 96GB of RAM each, with a 1TB RAID 1 array. One is there as redundancy in case the other fails. The disks can be removed and plopped into the other & it will be up and running in under 5 minutes.

Two of these, new, cost 5x less than equivalent brand-new server hardware (& with the server gear you'd only get one unit..).

ES uses 4 cores at ~3GHz and 16GB of memory.
 
Yeah, we use cloud backup as a secondary: Backblaze.
They beat my costs due to their way higher scale, and that's OK!
 
Again, I'm a total newb, so apologies if I missed it... but does that mean you're personally hosting the website hardware at your house?
 
No, it would be moved to a colocation facility that has redundant internet connections, redundant power, and a dude on hand to do physical hardware operations if needed.

I have a baby datacenter at my house also. I'm using a pfSense router to provide failover to a 4G LTE connection, plus huge battery backups.
 
Hey!

I have great news about Zeropress!
I went through, did a good portion of the CSS cleanup, found visual bugs, etc.
I encountered one bug related to polling XenForo for the login state.
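
For context, "polling for the login state" boils down to asking XenForo who the current user is. XenForo 2 ships a REST API with a /api/me endpoint; here's a rough sketch of the idea in Python (the URL and key are placeholders, and whether Zeropress uses this exact route rather than session cookies is an assumption on my part):

```python
import requests

# Placeholder forum URL and API key. With a super-user API key,
# XF-Api-User sets which user the request acts as.
resp = requests.get(
    "https://forum.example.com/api/me",
    headers={"XF-Api-Key": "YOUR_KEY", "XF-Api-User": "1"},
    timeout=5,
)
logged_in = resp.ok and "me" in resp.json()
```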

You can also resize the side menu on the left, for when you get too deep into a hierarchy. Just like Windows Explorer.
Code highlighting works in display mode but not edit mode. Getting it to work in edit mode is a nightmare, so we're leaving the rough edge in.

The amount of revision needed is very small. We are SO damn close to done.

[WIP screenshots]
 
Many moons ago I also colo'd a 4U in a DC for my own purposes. It can definitely cost a lot less in cash, but the cost in time, and more importantly downtime, was significantly higher. If AWS fails an instance, you can just axe it and bring a new one up in, what, 10 minutes or less? If that machine eats it (and it will, eventually), the downtime is potentially days.

I'm sure you're weighing the pros and cons, just tossing in my 2c.

I wonder if ES could be registered as a non-profit and find some cheaper/freer hosting options.
 
That's why I have a fleet of identical hardware to hot-swap a disk into. It turns a hardware failure into a 5-minute downtime event.
Disk failure is my main problem, but that's very rare in the age of the SSD. You can run a mondo redundant disk array over a fast interconnect and do the same thing you do on AWS - spin up another instance and connect the disk to it. All with free, open source software that came about in the last few years.

So now the value proposition of AWS is close to shot for any resource-hungry application that can tolerate a few hours a year of cumulative downtime. And our system is fairly resource-hungry, it being the longest-lasting, and still the largest, ebike forum on the internet.

I've been running development and file-sharing boxes on commercial internet for a long time now, and with the right battery backup, good hardware, and good configuration, the single internet connection is the only significant point of failure. These machines tend to run consistently for years without intervention. At the OS level they are designed to be self-maintaining. I don't find the maintenance to be burdensome or expensive - and it might be because I automate most of it and also cherry-pick and test the hell out of hardware before I put it in the field.

ES is not a business-critical service; we can stomach 99.97% uptime instead of 99.99% uptime.
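(For scale: 99.99% allows roughly 53 minutes of cumulative downtime a year; 99.97% allows roughly 2.6 hours.)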

The owner of the site is insistent on the current arrangement vs. becoming a nonprofit. I thought about that too. Instead, I just find efficiencies when I can to keep operating costs low. My way of saying thanks for saving this site 13 years ago. :)
 
Howdy y'all.

The search engine situation is improving very slowly; this week we went from 60% indexed to 66% indexed. Not as fast as we'd like, but at least the trend is positive.


With Zeropress, we hit a stumbling block: creating a good differential view of article changes. Doing it right is quite nasty, and pre-existing libraries don't seem to be helping.

The bang/buck-to-relevancy ratio for this on ES is really low. We will probably settle on a GitHub-like display of changes and just min-max this part, because it will be mostly nerds using it, and that's okay. We can improve it later.
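
For the nerds who'll be using it, that fallback is essentially a stock unified diff. A minimal sketch using Python's stdlib difflib (illustrative only, not the actual Zeropress code):

```python
import difflib

def article_diff(old: str, new: str) -> str:
    """GitHub-style unified diff between two article revisions."""
    return "".join(difflib.unified_diff(
        old.splitlines(keepends=True),
        new.splitlines(keepends=True),
        fromfile="old revision",
        tofile="new revision",
    ))
```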

Also..
A picture is worth 1000 words, so here's how I would portray the current mobile support:

[screenshot of current mobile support]

I.e., it's a mixture of good-looking and completely broken.

For ES I think this feature is mission-critical, so that one day you could pop open a virtual Chilton's service manual to help you fix and diagnose your bike in the field. The software is useless for many people without that ability.

I will assess the mobile situation and see what Mr. Kahuna has up his sleeve next week, & we will report back on whether we get to set a release date or if the cake still needs baking.

In the software industry, we have a joke that the last 10% of a program takes 90% of the time. By that logic, I'd say I don't know if we're at 95% or 99% done.
 
If power costs are something of an issue, you might keep an eye out for an AMD-based server system or two.


Note that the only real difference between an AMD Ryzen workstation CPU and an EPYC server CPU is the voltage and frequency at which they run.
In workstations they push the volts up to inefficient levels to get faster core clocks, for advertising more than any other reason, really.
(Most games still prefer one fast core over many cores.)

Since you can undervolt and underclock most workstation systems to the efficiency levels seen in the server CPUs, it opens the door to far cheaper options.
 
I operate ES out of a part of the USA that has very cheap power, residential at 11 cents/kWh (commercial, less), and the machine ES will run on one day averages 30 watts under the load ES will place on it; the CPU is basically an Intel i7 running at slightly lower clocks.
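(Back-of-napkin: 30 watts around the clock is about 263 kWh a year, which comes to under $30 at 11 cents/kWh.)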

The colocation facility I want to move to bills us a cheap flat rate for a rack spot, power and internet included. We could put some big Intel space-heater hardware in that rack and they wouldn't even notice.

 