There’s an open PR you can follow here: https://github.com/dessalines/jerboa/pull/514
I love the new icons; it’s much easier for me to immediately identify which communities are beehaw vs not-beehaw in Jerboa.
I’m currently working on making it so that fediverse links opened in Jerboa actually open within Jerboa. After that, I think we could see about how to support that “add more links” setting in the UI.
We just released a big new update to Jerboa that adds a lot of much-needed features and polish. We had 14 new contributors too!
So far it’s been good! Lemmy has made me hopeful for better social media. I’m not hugely into Twitter-style social media, so I was never really able to appreciate Mastodon.
I’m actually quite surprised with how much content is here already. There are regular posts and conversations, and a good mix of content. It’s not at the level reddit is in terms of volume, but I don’t feel starved or anything. I look forward to the future here!
Infinite scrolling is implemented in Jerboa; it could definitely be brought to the web client.
There’s an open PR that’ll fix the font size issue. I’m using it now and it’s great. I’m also working on adding my personal must-have UI options from Boost.
Image hosting seems like a fairly expensive endeavor, especially if your anticipated user base is just linking to your server from another site. I have a hard time thinking this could be done sustainably without requiring some sort of subscription on the uploader’s end, unfortunately.
I imagine it’ll be possible in the near future to improve the accuracy of technical AI content somewhat easily. It’d go something along these lines: have an LLM generate a candidate response, then have a second LLM validate that response. The validator would have access to real references it can use to check correctness; e.g. a Python response could be plugged into a Python interpreter to make sure it, to some extent, does what it purports to do. The validator then either decides the output is most likely correct, or generates feedback asking the first LLM to revise until the response passes validation. This wouldn’t catch 100% of errors, but a process like this could significantly reduce the frequency of hallucinations, for example.
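To sketch the idea a bit more concretely (a minimal sketch in Kotlin; `generateCandidate` and `validate` are hypothetical stand-ins for the real generator LLM and the validator/interpreter, not any real library):

```kotlin
// Hypothetical result of a validation pass: either accepted, or rejected with feedback.
sealed class Validation {
    object Accepted : Validation()
    data class Rejected(val feedback: String) : Validation()
}

// Stand-ins for real backends: the first would call a generator LLM, the second a
// validator LLM plus an external reference (e.g. a Python interpreter).
fun generateCandidate(prompt: String, feedback: String? = null): String =
    "candidate answer for: $prompt" + (feedback?.let { " (revised after: $it)" } ?: "")

fun validate(candidate: String): Validation =
    Validation.Accepted // a real validator would run/inspect the candidate against references here

// Generate-validate-revise loop with a retry cap so it always terminates.
fun answerWithValidation(prompt: String, maxRounds: Int = 3): String {
    var candidate = generateCandidate(prompt)
    repeat(maxRounds) {
        when (val result = validate(candidate)) {
            is Validation.Accepted -> return candidate
            is Validation.Rejected ->
                // Feed the validator's complaint back to the generator and try again.
                candidate = generateCandidate(prompt, result.feedback)
        }
    }
    return candidate // best effort after maxRounds; a real system might flag low confidence instead
}

fun main() {
    println(answerWithValidation("Write a function that reverses a string in Python"))
}
```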
I wholeheartedly support this
Consider charging at home, if you can. If your typical driving patterns consist of driving <100 miles from your home and it’s possible to plug in at home (a standard 120V outlet is typically sufficient), then you don’t need public charging stations. Just plug your car in at night and it’ll be full every morning.
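Rough back-of-the-envelope numbers, with everything here being an assumption (roughly 12 A continuous draw on a 120V outlet, ~3.5 miles per kWh, charging losses ignored):

```kotlin
// Rough overnight 120V charging estimate; all figures are assumptions, not measurements.
fun main() {
    val volts = 120.0           // standard household outlet
    val amps = 12.0             // typical continuous draw allowed on a 15 A circuit
    val hoursPluggedIn = 12.0   // evening to morning
    val milesPerKwh = 3.5       // ballpark EV efficiency

    val kwhAdded = volts * amps * hoursPluggedIn / 1000.0
    val milesAdded = kwhAdded * milesPerKwh

    println("~%.0f kWh added overnight, roughly %.0f miles of range".format(kwhAdded, milesAdded))
    // ~17 kWh and ~60 miles of range: comfortably covers typical daily driving,
    // though an unusually long day may take more than one night to top back up.
}
```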
This is due to poor error handling in the API client code, triggered by the server returning some sort of error. There’s an open issue but it hasn’t been taken up yet.
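For illustration only, this is roughly the shape of defensive handling that would avoid the crash; `fetchPosts`, `fetchPostsRaw`, and `ApiResult` are hypothetical stand-ins, not Jerboa’s actual client code:

```kotlin
// Hypothetical wrapper around an API call: surface server errors as a value
// instead of letting a failed parse or an exception crash the UI.
sealed class ApiResult<out T> {
    data class Success<T>(val value: T) : ApiResult<T>()
    data class Error(val message: String) : ApiResult<Nothing>()
}

// Stand-in for the real network call; the real one would do HTTP + JSON parsing.
fun fetchPostsRaw(): String = """{"error":"rate_limit_error"}"""

fun fetchPosts(): ApiResult<List<String>> =
    try {
        val body = fetchPostsRaw()
        if (body.contains("\"error\"")) {
            // The server replied with an error payload; report it rather than
            // trying to parse it as a post list (which is what crashes today).
            ApiResult.Error("server returned an error: $body")
        } else {
            ApiResult.Success(listOf(body))
        }
    } catch (e: Exception) {
        ApiResult.Error(e.message ?: "unknown network failure")
    }

fun main() {
    when (val result = fetchPosts()) {
        is ApiResult.Success -> println("got ${result.value.size} posts")
        is ApiResult.Error -> println("show a toast instead of crashing: ${result.message}")
    }
}
```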