  • Get a battery bank for your home to smooth out the demand curve.

    I’d advise anyone considering buying a battery bank to look at this one simple metric:

    Take the price of the battery bank and divide it by the total number of kWh it will deliver to your devices over its lifetime. That gives you a cost in $ per kWh that you can compare with what you currently pay for electricity.

    Every time I have run that exercise, the battery bank costs nearly as much per kWh as I pay for electricity from the grid, or more, even if I charge the battery for free (which is never really the case; even with solar there’s a cost of installation and maintenance).

    If you want the battery bank for grid independence, it’ll do that, but know there’s a cost.

    If you live somewhere with a steep time-of-use tariff, where you pay $0.20 more per kWh during peak hours than off-peak, the battery may make financial sense for you, particularly if you can sell power back to the grid at peak tariff rates during peak hours.
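
    As a rough sketch of that arithmetic (every number below is made up for illustration; plug in your own quote, capacity, cycle life, and rates):

      #!/usr/bin/env bash
      # Back-of-envelope battery economics. All figures are hypothetical.
      price=5000          # installed cost of the battery bank, $
      usable_kwh=10       # usable capacity per full cycle, kWh
      cycles=3650         # rated cycle life (~1 cycle/day for 10 years)
      grid_rate=0.15      # what you pay the grid, $/kWh
      peak_premium=0.20   # extra cost per kWh at peak on a time-of-use tariff

      lifetime_kwh=$(echo "$usable_kwh * $cycles" | bc -l)
      cost_per_kwh=$(echo "$price / $lifetime_kwh" | bc -l)
      printf 'Battery: $%.3f/kWh vs grid: $%.3f/kWh\n' "$cost_per_kwh" "$grid_rate"

      # Peak shifting only pays if the premium exceeds the battery's own cost/kWh.
      printf 'Net saving per peak kWh shifted: $%.3f\n' \
          "$(echo "$peak_premium - $cost_per_kwh" | bc -l)"

    With those made-up numbers the battery lands around $0.14 per kWh before charging losses, which is exactly the “near or more than grid” territory described above.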

    Most people: batteries are a waste of effort that don’t save you money, plus they have the added thrill of being a non-zero-risk source of a significant fire.


  • “how does someone who isn’t proficient in bash tell whether the bash script that AI has generated is a good one or a bad one?”

    What I find most bash scripts lack is consideration of error cases, edge cases, faulty inputs, and so on. It’s pretty trivial to write a script that copies some files from here to there, but what if the source files are missing, the destination has write-permission errors, or the destination already has files with the same names?
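
    A minimal sketch of what handling those cases looks like (the paths and the *.log glob are hypothetical stand-ins):

      #!/usr/bin/env bash
      # Defensive copy sketch; src, dest, and the glob are placeholders.
      set -euo pipefail
      shopt -s nullglob   # an unmatched glob expands to nothing, not itself

      src="$HOME/incoming" dest="/srv/archive"

      [[ -d "$src" ]]  || { echo "error: source '$src' is missing" >&2; exit 1; }
      [[ -w "$dest" ]] || { echo "error: cannot write to '$dest'" >&2; exit 1; }

      files=("$src"/*.log)
      (( ${#files[@]} )) || { echo "nothing to copy from '$src'" >&2; exit 0; }

      for f in "${files[@]}"; do
          base=$(basename -- "$f")
          if [[ -e "$dest/$base" ]]; then
              echo "skipping, already exists: $base" >&2   # name collision
              continue
          fi
          cp -- "$f" "$dest/"
      done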

    My latest Gemini script-writing conversation started with “do this in a bash script,” and it gave me a nice short script that did exactly that. Then it asked about the edge cases, one by one, and if/how I wanted to handle them. Four of its five observations were relevant to the task, and I told it to proceed with code to handle those (error out, show help, prompt for additional input, …), which it added with informative comments about what it intended to do. The remaining one didn’t make sense for the larger picture (which I hadn’t explained to it, so no real fault there).

    Yeah, it’s still bash glop, and that “shopt -s nullglob” is one of those things I have to look up whenever I see it, just to be sure it does what I think it does. But if you have any reasonable understanding of bash scripts, this is one of the more readable ones I have encountered. As the professional charged with creating the script, it’s your job to be sure it’s right, not the AI’s job, any more than it was your text editor’s responsibility to get it right in the past, even with code-completion tools. The AI is a tool that helps you put something together efficiently (code completion gone wild), but it’s no more responsible for that code than a chainsaw is responsible for where a tree falls.

    And when it all goes to shit, who will fix it if we have allowed human proficiency to wither away and die?

    Eight billion of us are so far down that rabbit hole in so many areas that we’d better make sure it doesn’t all go to shit, because if/when it does, we’ll be lucky to have 800,000 humans surviving even 50 years after the SHTF.


  • rather than subordinating themselves to the chatbot.

    I find that a great many people prefer to subordinate themselves to “their boss,” whoever or whatever that may be… it’s just so much easier than fighting for what you might believe is right but are obviously powerless to fix.

    when I tried to use it on tasks that were beyond my own technical expertise, things got messy really quickly.

    And that’s the difficult thing to measure. Is this task merely so packed with detail and volume that you could work through it yourself if you spent the time and effort? If so, AI can be a very useful tool. Or is the task genuinely beyond your understanding? Then you’re trusting the AI to fill in your blanks, which is irresponsible and, today, likely to fail. In the future there will be a big grey area where the AI is usually “good enough,” but how can you tell?

    In computer coding, there’s a certain amount to be gained by having “independent” AI agents review the code and eventually reach consensus. In other areas, you can leverage AI to do what I have done in the past: teach yourself what you need to know in order to do what you’re trying to do. The question there is: how do you know when you have learned enough to actually “know what you are doing” well enough to do it successfully? There are far too many people in the world who are overconfident in their insufficient understanding of what they are messing with, and AI is like a gasoline spray fountain on their smoldering embers.

    I couldn’t tell what they were because the task was at or beyond the limit of my own technical competency.

    I feel like writing a “guide to AI development” is a bit futile at the moment, because by the time you’ve written it and somebody reads it, the field will have evolved enough to invalidate much of what you wrote. One thing that has remained constant over the past 6 months, though, is the need for visibility. Don’t just ask AI to design you a bridge with construction drawings. Ask it to show its work: the structural analysis, equations, graphs of the solutions, references to standards and copies of their relevant parts, enough visibility and detail to spot its mistakes and oversights. In code, this means requirements, implementation plans, test plans, test execution results, and traceability from the code back to the requirements and tests.

    A couple of times, I managed to eventually figure out how to fix the error, but it was so exhausting

    I find that when I find and fix errors for AI (or junior programmers), it will often proceed to make the same mistake again, even going so far as to overwrite my working solution with its faulty code. If, instead, you work with it, Socratic-method style, to find the issue, document what went wrong, and solve it for itself, it tends to repeat that particular kind of problem less in the future. Until you start a new project and don’t bring over the “memory files” from the old one…

    struggling through a complex code problem leaves me with a greater understanding of my domain, but I didn’t this time.

    I find it’s a bit of a mix in that respect. I “learned Rust” by having AI code in Rust for me. I certainly know more about Rust than I did when I started, and I have certainly built bigger, more complex, and more successful projects with AI and Rust than if I had just started plucking away at Rust the way I did BASIC in the 1980s. Have I “learned Rust” better, or not as well, by using AI than if I had gone at it without? Is that even a relevant question? Rust is here, AI is here; it’s probably better, or at least more efficient, to learn to code Rust with AI tools than to first learn Rust without AI and then learn all the pitfalls of using AI to code Rust later. I’m sure that if I invested 2,000 hours learning Rust without AI I would know more about coding with Rust than I do after investing 200 hours learning Rust with AI, but is that a comparison even worth making?

    I did get a little better at prompting the AI

    That’s hard for me to judge. My results building programs with AI have improved dramatically over the past 6 months; how much of that is the models improving? Clearly they are improving, but then, how much is me learning to work with them more effectively? I feel the experience of working with the inferior models has been valuable, because the methods I developed to work with them also get better results from the newer models. If I had waited 12 months to jump in, after the models had improved dramatically, I might not be as good at getting results from the superior ones: they can at least make something functional from poor prompts, whereas the inferior models wouldn’t give you anything of value unless you brought some skill in specification, scoping, and refinement.

    the time savings from using an AI is negligible compared to the time savings from increasing my own proficiency.

    Increasing your own proficiency is an investment well worth making, but after 40 years of coding experience, I find that AI saves me significant time and effort beyond anything I’m likely to “learn better” before I die. What AI is mostly good at, for me, is the voluminous detail: documentation, unit test coverage, reviews for consistency. In development (of anything) there’s a tension between single-source-of-truth / don’t-repeat-yourself and copious examples, unit tests, and redundant information that keeps things from drifting off-track when you’re not looking. AI doesn’t do this automatically, but you can direct it to constantly review the redundant information for consistency and then fix the unwanted deviations to get back in line with your intent.


    I wanted to share it as I experienced it: I, too, was given a bare link with little introduction. It’s short enough that too much introduction would amount to spoilers. The story was written 23 years ago, but its opening describes a future that feels just a few years out, with our “AI assistants” telling us what to do, how to do it, etc. It goes on to describe a couple of alternate futures resulting from this technological advancement.

    Where today’s AI tech differs from the story is that today’s tech is flaky enough that professional experts in their fields still have to “push back” to get good results out of the AI tools: tell them when they’re wrong, guide them toward better or more preferred solutions. What’s new this past 12 months, as opposed to the previous 50 years of AI development, is that the tech is advancing steadily and rather quickly, to the point that it feels like it might be able to implement “Manna” within just a couple more years.


  • so when I choose something to listen to, I am either picking with or against what’s already playing.

    AcousticBrainz has (had?) a bunch of dimensional measures of various qualities of a song. The way I used it: first pick a set of maybe 4 to 8 songs to “set the mood,” then select a list of a few hundred songs “closest to” those songs across all the AB dimensional measures, pre-filtering out artists and songs recently played. The final step was to sort the remaining candidates by their similarity to the songs most recently played. Selection was still random within that list, but weighted so that the songs most similar (by AB measurements) were the most likely to be queued up next.
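
    In pipeline form it looked something like the sketch below. The feature file, its columns, and the seed numbers are hypothetical stand-ins (real AB data has far more dimensions, and the features would need normalizing to comparable scales first):

      #!/usr/bin/env bash
      # features.tsv (hypothetical): path <TAB> bpm <TAB> energy <TAB> valence
      # Seed point: averaged from the 4-8 "mood" songs.
      seed_bpm=92; seed_energy=0.35; seed_valence=0.60
      pool_size=300

      # Rank the library by distance to the seed, keep the closest few hundred.
      awk -F'\t' -v b="$seed_bpm" -v e="$seed_energy" -v v="$seed_valence" '
          { d = ($2-b)^2 + ($3-e)^2 + ($4-v)^2   # squared distance ranks the same
            printf "%f\t%s\n", d, $1 }
      ' features.tsv | sort -n | head -n "$pool_size" | cut -f2- > candidates.txt

      # Drop recently played tracks, then draw one. (The real system did a
      # weighted draw biased toward songs nearest the last few played; shuf
      # is the unweighted simplification.)
      grep -vxFf recently_played.txt candidates.txt | shuf -n 1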


    What works for me is to have a pool of thousands of “songs I like,” but then you’ve got the mood problem: Metallica or Sarah McLachlan? That’s what AcousticBrainz was good at: picking through my collection for similar songs and playing that “mood” from the pool of songs I’ve already indicated I like by including them in the list to choose from.

    Where it excelled was at finding the outliers, like the relatively quiet Metallica song that fits with the current set.


  • Sam is still early, and obnoxious, but I’ve been monitoring AI progress since the 1980s. Roughly one year ago, AI coding agents sort of turned the corner from being not really any more useful than a Google search (which is itself very useful) to getting things right more often than they hallucinate. That was an important watershed, because from that point they could make forward progress, fixing more mistakes than they made.

    In the 12 months since, there has been steady and rapid forward progress. If you haven’t asked an AI to code something for you in the last 3 months, you’re out of touch with where it’s at today.

    Even free Gemini rips out really good bash scripts faster than you can look up the first weird thing you want it to do.