A software developer and Linux nerd, living in Germany. I’m usually a chill dude but my online persona doesn’t always reflect my true personality. Take what I say with a grain of salt, I usually try to be nice and give good advice, though.

I’m into Free Software, selfhosting, microcontrollers and electronics, freedom, privacy and the usual stuff. And a few select other random things as well.

  • 4 Posts
  • 717 Comments
Joined 4 years ago
Cake day: August 21st, 2021






  • Good question. I was planning to start fresh as well, at least at some point. I think I’m going to first add the devices and do a better job documenting what I have and what firmware I modified and how, pay attention to naming things in a coordinated manner, set the areas… And then think about what automations I need, what blueprints are available, and which newer methods achieve the same thing. And throw overboard all the testing relics, HACS integrations, ESPHome configs and automations I don’t need anymore and for some reason keep around for reference. And then there’s the UI, which I’m bad at. I think I’d have to watch some YouTube tutorials to see how other people structure it in a sane way. I heard the bubble cards are popular these days.


  • Thanks. That sounds reasonable. Btw, you’re not the only poor person around; I don’t even own a graphics card… I’m not a gamer, so I never saw any reason to buy one before I took an interest in AI. I’ll do inference on my CPU, which is connected to more than 8GB of memory. It’s just slow 😉 But I guess I’m fine with that. I don’t rely on AI, it’s just tinkering, and I’m patient. And a few times a year I’ll rent some cloud GPU by the hour. Maybe one day I’ll buy one myself.


  • Sure. I’m all for the usual system design strategy with strong cohesion within one component and loose coupling on the outside to interconnect all of that. Every single household appliance should be perfectly functional on its own. Without any hubs or other stuff needed.

    For self-contained products, or ones without elaborate features, I kind of hate these external dependencies. I wouldn’t want to do without my NAS and the way I can access my files from my phone, computer or TV. But other than that, I think the TV and all other electronics should work without being connected to other things.

    I mean, edge computing is mainly about saving cost and power. It doesn’t make sense to fit each of the devices with a high-end computer and maybe half a graphics card so they can all do AI inference. That’s expensive, and you can’t have battery-powered devices that way. If they need internet anyway (and that’s the important requirement), just buy one GPU and let them all use it. They’ll fail without the network connection anyway, so it doesn’t matter, and this is easier to maintain and upgrade, and probably faster and cheaper.

    A bit like me buying one NAS instead of one 10TB harddisk for the laptop, one for the phone, one for the TV… And then I can’t listen to the song on the stereo because it was sent to my phone.
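    The “one shared GPU” idea above can be sketched as a toy in-process example: thin “devices” hand work to one central inference server instead of each running its own model. Everything here (the device names, the `fake_infer` stand-in for a real model) is a made-up illustration, not anything from an actual product:

```python
# Toy sketch of shared edge inference: several thin devices queue jobs
# for one central "GPU server" instead of each owning a model.
from queue import Queue


def fake_infer(prompt: str) -> str:
    # Stand-in for a real model running on the one shared GPU.
    return f"result for: {prompt}"


def gpu_server(jobs: Queue, results: dict) -> None:
    # Drain the job queue and run inference centrally for all devices.
    while not jobs.empty():
        device, prompt = jobs.get()
        results[device] = fake_infer(prompt)


jobs = Queue()
for device, prompt in [("doorbell", "who is at the door?"),
                       ("vacuum", "map this room")]:
    jobs.put((device, prompt))

results = {}
gpu_server(jobs, results)
```

    The trade-off is exactly the one described above: the devices stay cheap and battery-friendly, but they depend on the network and the central box being up.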

    But my premise is that the voice stuff and AI features are optional. If they’re essential, my suggestion wouldn’t really work. I rarely see the need. I mean, in your example the smoke alarm could trigger and Home Assistant would send me a push notification on my phone. I’d whip it out and have an entire screen with status information and buttons to deal with the situation. I think that’d be superior to talking to the washing machine. I don’t have a good solution for the timer. One day my phone will do that as well.

    But mind you, your solution also needs the devices to communicate via one protocol and be connected. The washing machine would need to get informed by the kitchen, be clever enough to know what to do about it, and also need to tell the dryer next to it to shut up… So we’d need to design a smart home system. If the devices all connect to a coordinator, perfect. That could be the edge computing “edge”. If not, it’d be some sort of decentralized system. And I’m not aware of any in existence. It’d be challenging to design and implement. And such systems tend to be problematic with innovation because everything needs to stay compatible, pretty much indefinitely. It’d be nice, though. And I can see some benefits if arbitrary things just connect, or stay separate, and there’s no buying into an entire ecosystem involved.
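    The smoke-alarm push notification is roughly what a standard Home Assistant automation does. A minimal sketch (the entity ID and notify service name are made-up placeholders; yours would differ):

```yaml
automation:
  - alias: "Smoke alarm push notification"
    trigger:
      - platform: state
        entity_id: binary_sensor.kitchen_smoke   # placeholder entity
        to: "on"
    action:
      - service: notify.mobile_app_my_phone      # placeholder notify target
        data:
          title: "Smoke detected!"
          message: "The kitchen smoke alarm triggered."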








  • Yes, that’s certainly right, the GDR won’t manifest itself again in that form, and no chapter of history will repeat itself exactly like that. It all happened in a larger context, and the world changes, and has done so very clearly.

    I think the other open questions can be settled fairly easily, too. For the people in a country occupied or annexed by Russia, things will probably go in a similar direction as they do for actual Russians. The economy will tank pretty badly because of all the corruption and the oligarchs. If you want to voice a dissenting political opinion, you’d better move away and live in exile… And if you’re unlucky, male, and a war happens to need fighting, you get picked up off the street and shipped off to the front somewhere. Preferably if you come from the poorer/unimportant regions. Everyone else queues at the gas station hoping to get lucky and fill up their car again. Or better yet, you do without a car, they’re bad for the environment anyway…

    The thing about Putin wanting the Soviet Union back doesn’t come out of thin air; after all, he said it himself on camera. So this “claim” ultimately comes from him. It’s certainly somewhat questionable how much there is to it. However, I seem to remember that both state television and the German ambassador are on the same page here. None of that is very surprising. There are various narratives these people tell. I think it also ties into the narrative that they’re not waging a war, because all the former satellite states technically belong to Russia anyway, so they can do whatever they want with them.

    Whether it comes down to a single person is something I personally find hard to answer. After all, we Germans and Russia weren’t enemies for a long time. Until recently, our strategy was to be friendly and to grow closer and connect through trade. I don’t know, I was quite okay with buying the natural gas for the heating at home from them. And as far as I know, there are plenty of mineral resources in that huge country, quite a few things we’d like to have as well. So peace and good relations are in our own interest. Now and then I’d also read things in electronics tinkering forums online that some crazy Russian hobbyists had kindly shared. Even I, here in the deep west, have a Russian sister city, and every now and then (and not that rarely) you meet people who learned Russian in school. There are German-Russians we go to university or school with here, a short intermezzo from the earlier (long past) era when this region was shaped by workers, coal and steel.

    So I honestly can’t for the life of me imagine that we’re hostile toward Russia. And for a while that even worked out halfway decently. Though obviously not sustainably… Bizarrely, Putin himself has a connection to Germany, speaks our language fluently, and used to present himself very differently here. But in the recent past, that’s definitely over. And then the question becomes… Is it down to this person? Or to something else… But what, then? Russia’s domestic shift from something close to a (broken) democracy to something you can almost call a dictatorship, I would attribute pretty directly to this one person.



  • I think they should be roughly in a similar range for selfhosting?! They’re both power-efficient and probably fast enough for the average task. There might be a few perks with the ThinkCentre Tiny. I haven’t looked it up, but I think you should be able to fit an SSD and a harddrive, and maybe swap the RAM if you need more. And they’re sometimes on sale somewhere and should be cheaper than a RasPi 5 plus required extras.



  • I’m a bit below 20W. But I custom-built the computer a long time ago with an energy-efficient mainboard and a PicoPSU. I think other options for people who don’t need a lot of harddisks or a graphics card include old laptops or mini PCs. Those should idle at somewhere around 10-15W. It stretches the definition of “desktop pc” a bit, but I guess you could place them on a desk as well 😉
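    To put those idle figures in perspective, a quick back-of-the-envelope calculation of yearly energy use and cost (the 0.30 EUR/kWh electricity price is my assumption, roughly a German household rate):

```python
# Yearly energy use and cost at a constant idle power draw.
def yearly_cost(watts: float, eur_per_kwh: float = 0.30):
    kwh_per_year = watts * 24 * 365 / 1000  # W -> kWh over a year
    return kwh_per_year, kwh_per_year * eur_per_kwh

for w in (10, 15, 20):
    kwh, cost = yearly_cost(w)
    print(f"{w} W idle = {kwh:.0f} kWh/year = {cost:.0f} EUR/year")
```

    So the difference between a 20W desktop and a 10W mini PC is on the order of 25 EUR a year at that rate.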


  • You just described your subjective experience of thinking.

    Well, I didn’t just do that. We have MRIs, we’ve looked into the brain, and we can see how it’s a process. We know how we learn and change by interacting with the world. None of that is subjective.

    I would say that the LLM-based agent thinks. And thinking is not only “steps of reasoning”, but also using external tools for RAG.

    Yes, that’s right. An LLM alone certainly can’t think. It doesn’t have a state of mind; it’s reset a few seconds after it does something and forgets everything. It’s strictly tokens from left to right. And it also doesn’t interact with the world, which would otherwise have an impact on it. It’s limited to what we bake in during the training process from what’s on Reddit and other sources. So there are many fundamental differences here.

    The rest of it emerges from an LLM being embedded into a system. We provide tools to it, a scratchpad to write something down, we devise a pipeline of agents so it’s able to draft something and later return to it. Something to wrap it all up and not just output all the countless steps before. It’s all a bit limited by the representation and by having to cram everything into a context window, and it’s also somewhat limited to concepts it was able to learn during the training process.
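    That “bigger thing built around the LLM” can be sketched as a minimal agent loop: feed the task plus a scratchpad to the model, let it call tools, and persist the results between steps. The `call_llm` function here is a hard-coded stub for illustration; a real system would query an actual model, and the tool-call text format is made up:

```python
# Minimal agent-loop sketch: the loop, scratchpad and tools live
# outside the LLM, which on its own is stateless.
def call_llm(prompt: str) -> str:
    # Stub standing in for a real model call. It fakes one tool call,
    # then a final answer once a tool result appears in the prompt.
    if "TOOL_RESULT" not in prompt:
        return "TOOL:search:idle power mini pc"
    return "FINAL:Mini PCs idle around 10-15 W."


def run_agent(task: str, tools: dict) -> str:
    scratchpad = []  # external memory the bare LLM lacks
    for _ in range(5):  # bounded loop: context and compute are finite
        prompt = task + "\n" + "\n".join(scratchpad)
        reply = call_llm(prompt)
        if reply.startswith("FINAL:"):
            return reply[len("FINAL:"):]
        _, name, arg = reply.split(":", 2)  # parse "TOOL:name:argument"
        result = tools[name](arg)
        scratchpad.append(f"TOOL_RESULT {name}: {result}")
    return "gave up"


answer = run_agent("What do mini PCs idle at?",
                   {"search": lambda q: "10-15 W typical"})
```

    The point of the sketch: memory, iteration and tool use all come from the surrounding loop, not from the model itself.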

    However, those abilities are not in the LLM itself, but in the bigger thing we build around it. And it depends a bit on the performance of the system. As I said, the current “thinking” processes are more of a mirage, and I’m pretty sure I’ve read papers on how models don’t really use them to think. That aligns with what I see once I open the “reasoning” texts. Theoretically, the approach surely makes everything possible (limited in practice by how much context we have and how much computing power we spend). But what kind of performance we actually get is an entirely different story. And we’re not anywhere close to proper cognition. We hope we’re eventually going to get there, but there’s no guarantee.

    The LLM can for sure make abstract models of reality, generalize, create analogies and then extrapolate.

    I’m fairly sure extrapolation is generally difficult with machine learning. There’s a lot of research on it, and it’s just massively difficult to make machine learning models do it. Interpolation, on the other hand, is far easier. And I’ll agree: the entire point of LLMs and other types of machine learning is to force them to generalize and form models. That’s what makes them useful in the first place.

    It doesn’t even have to be an LLM. Some kind of generative or inference engine that produce useful information which can then be modified and corrected by other more specialized components and also inserted into some feedback loop

    I completely agree with that. LLMs are our current approach, and the best approach we have. They just have a scalability problem (and a few other issues). We don’t have infinite datasets to feed in or infinite compute, and everything seems to grow exponentially more costly, so maybe we can’t make them substantially more intelligent than they are today. We also don’t teach them to stick to the truth, or be creative, or follow any goals. We just feed in random (curated) text and hope for the best, with a bit of fine-tuning and reinforcement learning from human feedback on top. But that doesn’t rule out anything. There are other machine learning architectures with feedback loops and way more powerful designs. They’re just too complicated to calculate. We could teach AI about factuality and creativity and expose some control mechanisms to guide it. We could train a model with a different goal than just producing the next token so that it looks like text from the dataset. That’s all possible.

    I just think LLMs are limited in the ways I mentioned, and we need one of the hypothetical new approaches to get them anywhere close to the level a human can achieve… I mean, I frequently use LLMs. And they all fail spectacularly at computer programming tasks I do in 30 minutes. And I don’t see how they’d ever be able to do it, given the level of improvement we see as of today. I think that needs a radically new approach in AI.