

I’ve personally witnessed the seventh entry to this meme: lines of code over 1000 columns wide.
I try to forget, but the horror never fades.


Does anyone want to even guess what time this clock is supposed to be indicating?


I like vimdiff, since it’s fairly quick to collapse and expand code chunks if you know the keyboard shortcuts. Actually, since it’s vim, knowing the keyboard shortcuts is the entire game lol.
I usually have vimdiff open in a horizontal pane in tmux, then use the other horizontal pane to look at other code that the change references. Could I optimize and have everything in a single vim session? Sure, but at that point, I’d also want cscope set up to find references within vim, and I’m now trivial steps away from a full IDE in vim.
… which people do have, and more power to them. But alas, I don’t have the luxury of fastidious optimization of my workflow to that degree.
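For concreteness, the two-pane setup I described looks roughly like this (the file names are made up, and the vim motions are just the stock diff-mode ones):

    # one tmux pane for vimdiff, the other for browsing referenced code
    tmux split-window -h
    vimdiff module.old.c module.new.c
    # inside vim: zo opens a fold, zc closes it, ]c and [c jump between hunks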


In my personal workflow, I fork GitHub and Codeberg repos so that my local machine’s “origin” points to my fork, not to the main project. And then I also create an “upstream” remote to point to the main project. I do this before even looking at the code on my local machine, as a matter of course.
Why? Because if I do decide to draft a change in the future, I want my workflow to be as smooth as possible. And since the norm is to push to one’s own fork and then create a PR from there to the upstream, it makes sense to set my “origin” to my fork; most established repos won’t let outside contributors push new topic branches anyway.
If it turns out there’s nothing to contribute, I’ll still leave the fork around, because it’s basically zero-cost.
TL;DR: I fork in preparation for an efficient workflow.
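In concrete terms, the setup looks something like this (user, org, and branch names here are hypothetical):

    # clone my fork, so that "origin" points at the fork
    git clone git@github.com:myuser/someproject.git
    cd someproject
    # add the main project as "upstream"
    git remote add upstream https://github.com/someorg/someproject.git
    # later, if a change materializes: branch off upstream, push to the fork
    git fetch upstream
    git switch -c fix-the-thing upstream/main   # assumes the default branch is "main"
    git push -u origin fix-the-thing

From there, the PR is opened from the fork’s topic branch against the upstream repo.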


For a link of 5.5 km and with clear LoS, I would reach for 802.11 WiFi, since the extended range of 802.11ah HaLow wouldn’t necessarily be needed. For reference, many WISPs use Ubiquiti 5 GHz point-to-point APs for their backhaul links over much longer distances.
The question would be what your RF link conditions look like, whether 5 GHz is clear in your environment, and what sort of worst-case bandwidth you can accept. With a clear Fresnel zone, you could probably be pushing something like 50 Mbps symmetrical, if properly aimed and configured.
Ubiquiti’s website has a neat tool for roughly calculating terrain and RF losses.
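For a rough feel of the numbers before reaching for that tool: the first Fresnel zone’s midpoint radius works out to about 8.66 × √(d/f) meters, with d in km and f in GHz, and the usual rule of thumb is to keep at least 60% of it clear. A quick back-of-the-envelope check, taking 5.8 GHz as an example channel:

    # midpoint radius of the first Fresnel zone: r = 8.66 * sqrt(d_km / f_GHz)
    awk 'BEGIN { d = 5.5; f = 5.8; printf "r = %.1f m\n", 8.66 * sqrt(d / f) }'
    # prints roughly 8.4 m for a 5.5 km link at 5.8 GHz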


I don’t have an answer for your woes, but MTU issues are notoriously difficult to investigate and mitigate, as Cloudflare found out: https://blog.cloudflare.com/increasing-ipv6-mtu/
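For anyone probing their own path, one quick first check is forcing full-size unfragmented pings (these are the Linux ping flags; the host is just an example):

    # 1472-byte payload + 28 bytes of headers = a 1500-byte IPv4 packet with DF set;
    # a smaller MTU somewhere on the path shows up as "Frag needed" errors
    ping -M do -s 1472 example.com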


If you’re using SLAAC for automatic IP assignment, then the resulting EUI-64-based address would be essentially static, on the premise that your MAC address and local subnet prefix don’t change. Privacy extensions might get in the way, as might Android’s randomized-MAC feature, but both are adjustable.
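For illustration, the EUI-64 interface ID is derived mechanically from the MAC, which is why the resulting address is predictable (the MAC, prefix, and interface name below are made up):

    # EUI-64 derivation for MAC 00:11:22:33:44:55
    #   1. split the MAC:    00:11:22 | 33:44:55
    #   2. insert ff:fe:     00:11:22:ff:fe:33:44:55
    #   3. flip the U/L bit: 02:11:22:ff:fe:33:44:55
    # with prefix 2001:db8:1::/64, SLAAC yields 2001:db8:1:0:211:22ff:fe33:4455
    ip -6 addr show dev eth0   # check which addresses were actually assigned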


Concrete example of threat modeling: if someone found out I was using Signal, for any reason at all, would that cause problems for me?
If yes, then Signal is not a good option. If no, then Signal may be appropriate. Why? Because in their documentation, they explicitly state that while messages are confidential, the fact that you’re using Signal cannot be hidden, and so they don’t make that guarantee.


Tbf, can’t the other party mess it up with signal too?
Yes, but this is where threat modeling comes into play. Grossly simplified, developing a threat model means assessing what sort of attackers you reasonably expect to make an attempt on you. For some people, the greatest concern is their conservative parents finding out that they’re on birth control. Others might be journalists trying to shield an informant’s identity from a rogue sheriff’s department in rural America. Yet others face the risk of a nation-state’s intelligence service trying to find their location while in exile.
Each of these users faces different potential attackers. Signal is well suited for the first two, but only alright against the third. After all, if the CIA or Mossad is following someone around IRL, there are other ways to crack their communications.
What Signal specifically offers is confidentiality in transit, meaning that all ISPs, WiFi networks, CDNs, VPNs, script kiddies with Wireshark, and network admins in the path of a Signal convo cannot see the contents of those messages.
Can the messages be captured at the endpoints? Yes! Someone could be standing right behind you, taking photos of your screen. Can the size or metadata of each message reveal the type of message (eg text, photo, video)? Yes, but that’s akin to feeling the shape of an envelope. Only through additional context can the contents be known (eg a parcel in the shape of a guitar case).
Signal also benefits from the network effect: someone trying to get away from an abusive SO has plausible deniability when they download Signal on their phone (“all my friends are on Signal” or “the doctor said it’s more secure than email”). Or a whistleblower can message a journalist who printed their Signal username in the newspaper. The best place to hide a tree is in a forest. We protect us.
My main issue for signal is (mostly iPhone users) download it “just for protests” (ffs) and then delete it, but don’t relinquish their acct, so when I text them using signal it dies in limbo as they either deleted the app or never check it and don’t allow notifs
Alas, this is an issue with all messaging apps, if people delete the app without closing their account. I’m not sure there’s anything Signal can do about it, but the base guarantees still hold: either the message is securely delivered to their app, or it never gets seen. Either way, confidentiality is maintained.
I’m glossing over a lot of cryptographic guarantees, but for one-to-one or small-group private messaging, Signal is the best mainstream app at the moment. Secure large-group messaging, like organizing hundreds of people for a protest, is still up for grabs, because even if an app were 100% secure, any one of those people could leak the messages to an attacker. More participants means more potential for leaks.


When I see E2EE and XMPP mentioned, I think of this blog post by Soatok, outlining some very odd cryptographic choices in XMPP + OMEMO: https://soatok.blog/2024/08/04/against-xmppomemo/
I would very much like to see a richer playing field than just Signal for private messaging, but it’s a tough nut to crack. As for exactly which aspect turns me away from XMPP for E2EE, I think this nails it:
you only need check whether OMEMO is on by default (it isn’t), or whether OMEMO can be turned off even if your client supports it (it can).
When the competition is Signal, these sorts of details matter a lot.


You might consider posting to !indiegaming@lemmy.world as well
Having previously been on the reviewing side of job applications: if you have GitHub/Codeberg repos with your work, please, please, please include those links somewhere on the resume, ideally both spelled out and clickable in the PDF. It’s a neat trick to showcase more work than what fits on a page.
Although the non-technical recruiters might gloss over links, the technical reviewers very much look at your code examples. Why? Because seeing your coding style and hygiene, Git workflow and commit messages, documentation, and overall approach to iterative improvement of a codebase is far more revealing than anything that AI-nonsense coding tests can show.
So while this won’t necessarily get your resume past the first gate, always be thinking about the different audiences your resume might be passed around to within the prospective organization.
I use LibreOffice as my word processor, with no substantial automation to speak of. Each time I intend to submit a resume, I save off a new copy and tailor it specifically for the recipient employer. After all, what’s relevant and worth highlighting (not literally!) for one employer won’t be the same as for another.
Yes, I’m aware that a lot of recruiters/reviewers use LLMs as a first-pass filter, but that’s precisely why my submission should be crafted by hand each time: if it’s an LLM, then I want its checkbox exercises to be easily met, and if it’s a human, I want to put my best foot forward.
In days of yore, when paper resumes were circulated by hand to prospective employers at career fairs, having a bespoke resume for each would have been difficult to pull off. But with PDF submissions, there’s no reason not to gear your submission to exactly the skills a company is looking for.
To be clear, tailoring a resume does not mean adding fake or hallucinated qualifications that you do not possess. Rather, it means copyediting the resume so that your relevant skills are readily apparent. If you already listed an example project from a prior employer or internship, but a different project would better align with the prospective employer, consider swapping out the example for maximum appeal. Bullet points are particularly easy to rearrange: if you have web-dev skills and the employer wants them, move those up the list. And so on.
Although resumes are now mostly PDFs, the custom remains – both as an informal fairness criterion between applicants, and because anything longer is simply more to read – that one’s resume should fit on a single sheet of US Letter or A4 paper, barring unique exceptions like professors with long lists of published papers or systems architects holding patents. And so the optimization problem is how to most effectively use the space on that one sheet of digital paper.


If only one side of the switch/points remains, then depending on the type of crossing and the condition of the wheels, there’s a chance that the trolley’s right-side wheels can jump over the switch and continue straight ahead, even with the switch set to diverge onto the non-existent siding.
Or it could derail but continue barreling forward anyway. But trolleys don’t tend to be going that fast.


Let me make sure I understand everything correctly. You have an OpenWRT router which terminates a WireGuard tunnel, which your phone will connect to from somewhere on the Internet. When the WireGuard tunnel lands within the router in the new subnet 192.168.2.0/24, you have iptables rules that will:
So far, this seems alright. But where does the service run? Is it on your LAN subnet or the isolated 192.168.2.0/24 subnet? The diagram you included suggests that the service runs on an existing machine on your LAN, so that would imply that the router must also do address translation from the isolated subnet to your LAN subnet.
That’s doable, but ideally the service would be homed on the isolated subnet. But perhaps I misunderstood part of the configuration.
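If the service were homed on the isolated subnet, the router-side rules could be as simple as this sketch (the interface name and addresses are assumptions, not your actual config):

    # assume wg0 = the WireGuard interface, service at 192.168.2.10:8080
    iptables -A FORWARD -i wg0 -d 192.168.2.10 -p tcp --dport 8080 -j ACCEPT
    iptables -A FORWARD -i wg0 -j DROP   # nothing else reachable from the tunnel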


+1 because this is a much more concise description of free vs open source, the exact obligations of the (A)GPL license, and of use vs distribution, than what I’ve written in the past vis-a-vis proposals of non-free licenses like SSPL and Futo.


I did indeed have a chuckle, but also, this shouldn’t be too foreign compared to other, more-popular languages. The construction func param1 param2 can be found in POSIX shell, with Bash scripts regularly using that construction to pass arguments around. And although wrapping that call in parentheses would create a subshell, it would still work, so you could have a Lisp-like invocation in your sh script. If you want one of those parameters to be evaluated, though, you’re forced to use the $() construction, which adds the dollar symbol.
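A minimal sh illustration (the function and arguments are arbitrary):

    # a shell function invoked as: func param1 param2
    greet() { printf 'hello %s and %s\n' "$1" "$2"; }
    greet alice bob          # ordinary invocation
    ( greet alice bob )      # parenthesized: runs in a subshell, but still works
    echo "today: $( date )"  # evaluating a call's output requires the $() form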
As for Lisp code that often looks like symbol soup, like (= 0 retcode), the equals sign is just the name of the numerical-equality function, which takes two numbers. The idea of using “=” as a function name should not be abnormal for Java or C++ programmers, because operator overloading allows doing exactly that.
So although it does look kinda wonky to anyone who hasn’t seen Lisp in school, sufficient exposure to popular codebases and languages should impart an intuition for how Lisp code is written. And one doesn’t even need to use an RPN calculator, although that also aids understanding of Lisp.


im not much of a writer, im sure its more clear from AI than if i did it myself
Please understand this in the kindest possible way: if you were not willing to write the documentation yourself, why should I want to review it? I too could use an AI/LLM to distill documentation rather than posting this comment, but I choose not to, because I believe that open discussion is a central tenet of open-source software. Even if you are not great at writing technical English, any attempt at all will be more germane to your intentions and objectives than whatever an LLM generates. You would have had to describe your intentions and objectives to the LLM first anyway. Might as well get real-life practice at writing.
It’s not that AI and LLMs can’t find their way into the software development process, but the question is to what end: using an AI system to give the appearance of a fully fleshed-out project when it isn’t, that is deceitful. Using an AI system to learn, develop, and revise the codebase, to the point that you yourself can adequately teach someone else how it works, that is divine.
With that out of the way, we can talk about the high-level merits of your approach.
how the authentication works: https://positive-intentions.com/docs/research/authentication
What is the lifetime of each user’s public/private keypair? What is the lifetime of the symmetric key shared between two communicating users? The former is important because people can and do lose their private keys, or have a need to intentionally destroy them. In such an instance, does the browser app explicitly invalidate the key and inform the counterparty? Or do keys silently disappear and take the message history with them?
The latter is important because the longer a symmetric key stays in use, the more ciphertext a malicious actor can store now and decrypt later, possibly once quantum computers can break today’s encryption. More pressing, though, is that a leak of the symmetric key reveals all prior and future messages, until the key is rotated.
how security works: https://positive-intentions.com/blog/security-privacy-authentication
I take substantial notice whenever a promise of “true privacy” is made, because it either delivers a very strange definition of privacy, or relies upon the reader to supply their own definition of what privacy means to them. When privacy is on offer, I’m always inclined to ask: privacy from whom? From network taps? From other apps running in the same browser?
This document pays only lip service to some notion of privacy, without any concrete terms. Instead, it spends a whole section attempting to solve secure key exchange, which simply boils down to “the user validates the hash they received through a secure medium”. If a secure medium existed, then secure key exchange would already be solved. If there isn’t one, an “a priori” hash of the expected key is still vulnerable to hash attacks.
this is my sideproject and im trying to get it off the ground
I applaud you for undertaking an interesting project, but you also have to be aware that many others have tried their hand at secure messaging, with more fails than successes. The blog posts of Soatok show the fails within just the basic cryptography, and that doesn’t even get to some of the privacy issues that exist separately. For example, until Signal added support for usernames, it was mandatory to reveal one’s phone number to bootstrap a user’s identity. That has since been fixed, but they go into detail about why it wasn’t easy to arrive at the present solution.
am i a cryptographer yet?
I recall a recent post I saw on Mastodon, where someone implementing a cryptographic library made sure to clarify that they were a “cryptography engineer” and not a cryptographer, because they themselves had to consult with a cryptographer on how the implementation should work. That is to say, they recognized that although they were writing the code which implements a cryptographic algorithm, the guarantees come from the algorithm itself, and those are understood by and discussed amongst cryptographers. Sometimes nicely, and other times necessarily very bluntly. Those examples come from this blog post.
I myself am definitely not a cryptographer. But I can reference the distilled works of cryptographers, such as this 1999 post, which still finds relevance today:
The point here is that, like medicine, cryptography is a science. It has a body of knowledge, and researchers are constantly improving that body of knowledge: designing new security methods, breaking existing security methods, building theoretical foundations, etc. Someone who obviously does not speak the language of cryptography is not conversant with the literature, and is much less likely to have invented something good. It’s as if your doctor started talking about “energy waves and healing vibrations.” You’d worry.
I wish you the very best with this endeavor, but also caution as the space is vast and the pitfalls are manifold.


Aiming to create the worlds most secure messaging app
For anyone else that was looking for it, this is the link to the threat model: https://positive-intentions.com/docs/research/threat-model/
That said, it seems quite thin on hard details, such as how identities (ie usernames) are managed – eg are they unique? How can users cross-check an online identity against a real person? Fingerprints? QR codes? SHA256 hashes? – and whether identities are considered publicly exchangeable. Plus how users are bootstrapped so they can find each other.
While a threat model is the minimum needed to even begin an assessment of anything that utters the word “security”, I do have to ask:


For my own networks, I’ve been using IPv6 subnets for years now, and have NAT64 translation for when they need to access Legacy IP (aka IPv4) resources on the public Internet.
Between your two options, I’m more inclined to recommend the second, because although it requires renumbering existing containers into the new subnet, you would still have one subnet for all your containers; it’s just bigger now. Whereas the first option would either A) preclude containers on the first bridge from directly talking to containers on the second bridge, or B) require some sort of awful NAT44 translation to make the two work together.
So if IPv6 and its massive, essentially-unlimited ULA subnets are not an option, then I’d still go with the second solution, which is a bigger-but-still-singular subnet.
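Assuming this is Docker, the second option is basically a one-liner plus migration (the subnet values and network names here are arbitrary):

    # one larger bridge network to re-home all containers onto
    docker network create --subnet 172.20.0.0/16 containers-big
    # or, if IPv6 ULA space becomes an option after all:
    docker network create --ipv6 --subnet fd00:abcd:1234::/64 containers-v6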