• henfredemars@infosec.pub · 21 days ago

      I came here looking for this comment. They bought the service to destroy it. It’s kind of their thing.

          • lolcatnip@reddthat.com · 21 days ago

            What has Microsoft extinguished lately? I’m not a fan of Microsoft, but I think EEE is a silly thing to reference: it’s a strategy that worked for a little while in the ’90s, and Microsoft gave up on it long ago because it doesn’t work anymore.

            Like, what would be the purpose of them buying GitHub just to destroy it? And if that was their goal, why haven’t they done it already? Microsoft is interested in one thing: making money. They’ll do evil things to make money, just like any other big corporation, but they don’t do evil things just for the sake of being evil. It’s very much in their business interest to be seen as trustworthy, and being overly evil runs counter to that need.

  • varnia@lemm.ee · 20 days ago

    Good thing I moved all my repos from git[lab|hub] to Codeberg recently.

  • Lv_InSaNe_vL@lemmy.world · edited · 21 days ago

    I honestly don’t really see the problem here. This seems to mostly be targeting scrapers.

    For unauthenticated users you are limited to public data only and 60 requests per hour, or 30k if you’re using Git LFS. And for authenticated users it’s 60k/hr.

    What could you possibly be doing besides scraping that would hit those limits?

    • Disregard3145@lemmy.world · 21 days ago

      I hit those limits many times when signed out, just scrolling through the code. The front end must be sending off tonnes of background requests.

    • chaospatterns@lemmy.world (OP) · edited · 21 days ago

      You might be behind a shared IP with NAT or CG-NAT that shares the limit with others. You might be fetching files from raw.githubusercontent.com as part of an update system that doesn’t have access to browser credentials, Git cloning over https:// to avoid unlocking your SSH key every time, or cloning a repo whose submodules each issue their own requests. An hour is a long time. Imagine you let uBlock Origin update its filter lists, then you git clone something with a few submodules, and so does your coworker, and now you’re blocked for an entire hour.
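      A rough sketch of that arithmetic (every per-action request count below is a made-up illustration, not a measured value):

```python
# Toy model of one shared unauthenticated bucket behind a NAT'd IP.
# All per-action request counts here are illustrative assumptions.
LIMIT_PER_HOUR = 60

actions = {
    "uBlock Origin filter-list update": 10,   # one request per list (assumed)
    "your clone with 5 submodules": 6,        # main repo + each submodule
    "coworker's clone with 5 submodules": 6,
    "updater fetching raw files": 40,         # raw.githubusercontent.com (assumed)
}

total = sum(actions.values())
print(f"requests this hour: {total} / {LIMIT_PER_HOUR}")
print("rate limited for the rest of the hour" if total > LIMIT_PER_HOUR else "still fine")
```

Any one of these alone stays under the limit; sharing the bucket is what pushes it over.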

    • MangoPenguin@lemmy.blahaj.zone · 20 days ago

      60 requests per hour per IP could easily be hit by, say, uBlock Origin updating filter lists in a household with 5-10 devices.

  • tal@lemmy.today · 21 days ago

    60 req/hour for unauthenticated users

    That’s low enough that it may cause problems for a lot of infrastructure. I’m pretty sure the MELPA Emacs package repository builds out of Git, and a lot of that is on GitHub.

    • NotSteve_@lemmy.ca · 21 days ago

      Do you think any infrastructure is pulling that often while unauthenticated? It seems like an easy fix either way (in my admittedly non-devops opinion).

      • Ephera@lemmy.ml · 21 days ago

        It’s going to be particularly problematic for organisations with larger offices. If you’ve got hundreds of devs/sysadmins behind the same public IP address, those 60 requests/hour are shared between all of them.

        Basically, I expect unauthenticated pulls to no longer be possible at my day job, which means repos hosted on GitHub become a pain.

        • timbuck2themoon@sh.itjust.works · 21 days ago

          Quite frankly, companies shouldn’t be pulling willy-nilly from GitHub or npm anyway. It’s trivial to set up something to cache repos or artifacts, and it also guards against GitHub itself being down.

          • Ephera@lemmy.ml · 20 days ago

            It’s easy to set up a cache, but what’s hard is convincing your devs to use it.

            Mainly because, well, it generally works without configuring the cache in your build pipeline, as you’ll almost always need some solution for accessing the internet anyways.

            But there are other reasons, too. You need authentication or a VPN to access a cache like that. Authentication means you have to deal with credentials, which is a pain. A VPN means it’s likely slower than downloading directly from the internet, at least while you’re working from home.

            Well, and it’s also just yet another moving part in your build pipeline. If that cache is ever down or broken or inaccessible from certain build infrastructure, chances are it will get removed from affected build pipelines and those devs are unlikely to come back.


            Having said that, GitHub is of course promoting caches quite heavily with this change. That might actually make one worth using for individual devs.
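            For what it’s worth, pointing a tool at a cache is usually a one-file config change once the cache exists. A pip example, assuming a hypothetical internal mirror at mirror.internal.example (that hostname is made up):

```ini
# pip.conf (per-user on Linux: ~/.config/pip/pip.conf)
# index-url swaps PyPI for a hypothetical internal mirror.
[global]
index-url = https://mirror.internal.example/pypi/simple
```

The hard part, as said above, is getting everyone to actually carry this config.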

        • NotSteve_@lemmy.ca · 21 days ago

          Ah yeah that’s right, I didn’t consider large offices. I can definitely see how that’d be a problem

      • Boomer Humor Doomergod@lemmy.world · 21 days ago

        If I’m using Ansible or something to pull images it might get that high.

        Of course the fix is to pull it once and copy the files over, but I could see this breaking prod for folks who didn’t write it that way in the first place

    • Xanza@lemm.ee · edited · 21 days ago

      That’s low enough that it may cause problems for a lot of infrastructure.

      Likely the point. If you need more, get an API key.

      • lolcatnip@reddthat.com · 21 days ago

        Or just make authenticated requests. I’d expect that to be well within the capabilities of anyone using MELPA, and 5000 requests per hour shouldn’t pose any difficulty considering MELPA only has about 6000 total packages.
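        As a sketch of how small that change is: authenticating a REST call is one header. This builds (but deliberately doesn’t send) a request against GitHub’s rate_limit endpoint, with a placeholder token:

```python
import os
import urllib.request

# Placeholder; a real token comes from GitHub's account settings.
token = os.environ.get("GITHUB_TOKEN", "ghp_placeholder")

# Authenticated requests count against the per-user quota instead of
# the shared per-IP unauthenticated quota.
req = urllib.request.Request(
    "https://api.github.com/rate_limit",
    headers={
        "Authorization": f"Bearer {token}",
        "Accept": "application/vnd.github+json",
    },
)

# Only built here, not sent; sending needs network access:
#   with urllib.request.urlopen(req) as resp: print(resp.status)
print(req.get_header("Authorization"))
```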

        • Xanza@lemm.ee · 20 days ago

          This is my opinion on it, too. Everyone is crying about the death of GitHub when they’re just cutting back on unauthenticated requests to curb abuse… lol, seems pretty standard practice to me.

    • The Go module system pulls dependencies from their sources. This should be interesting.

      Even if you host your project on a different provider, many libraries are on GitHub. Think of all the unauthenticated Arch users trying to install Go-based software that pulls dependencies from GitHub.

      How does the Rust module system work? How does pip?

      • UnityDevice@lemmy.zip · edited · 21 days ago

        Compiling any larger Go application would hit this limit almost immediately. For example, podman is written in Go and has around 70 direct dependencies, or about 200 including transitive ones. Not all of those dependencies are hosted on GitHub, but the vast majority are. That means that with a limit of 60 requests per hour, it would take over 3 hours just to fetch everything needed to build podman on a new machine.
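        The back-of-the-envelope version of that claim (the dependency count is the estimate above, and one request per dependency is an optimistic assumption; a real module fetch can take several):

```python
DEPENDENCIES = 200   # podman incl. transitive deps (estimate from above)
UNAUTH_LIMIT = 60    # unauthenticated requests per hour

# Optimistically: one request per dependency fetch.
hours = DEPENDENCIES / UNAUTH_LIMIT
print(f"~{hours:.1f} hours spent waiting on the rate limiter")
```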

      • Ephera@lemmy.ml · 21 days ago

        For Rust, as I understand, crates.io hosts a copy of the source code. It is possible to specify a Git repository directly as a dependency, but apparently, you cannot do that if you publish to crates.io.

        So, it will cause pain for some devs, but the ecosystem at large shouldn’t implode.
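        For reference, the difference shows up directly in Cargo.toml (serde is a real crate; some-lib and its URL are made-up examples):

```toml
[dependencies]
# Pulled from crates.io, which hosts the source itself -- unaffected:
serde = "1.0"

# Pulled straight from GitHub, so subject to GitHub's limits:
some-lib = { git = "https://github.com/example/some-lib" }
```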

        • I should know this, but I think Go’s module metadata server also caches, and the toolchain looks there first unless you override it. I remember Drew got pissed at Go because the package server was pounding on sr.ht for version information; I really should look into the details. It Just Works™, so I’ve never bothered to read up on how it works. A lamentable oversight I’ll have to correct given this new rate limit. It might be no issue after all.

          • Ephera@lemmy.ml · 20 days ago

            I also remember there being a tiny shitstorm when Google started proxying package manager requests through their own servers, maybe two years ago or so. I don’t know what happened with that, though, or if it’s actually relevant here…

      • adarza@lemmy.ca · 21 days ago

        that is not an acceptable ‘solution’ and opens up an entirely different and more significant can o’ worms instead.

    • adarza@lemmy.ca · 21 days ago

      i’ve hit it many times so far… even as quick as the second page view (first internal link clicked) after more than a day or two since the last visit (yes, even with cleaned browser data or private window).

      it’s fucking stupid how quick they are to throw up a roadblock.

  • John Richard@lemmy.world · 21 days ago

    Crazy how many people think this is okay, yet left Reddit because of their API shenanigans. GitHub is already halfway to requiring sign-in to view anything, like Twitter (X).

    • henfredemars@infosec.pub · 21 days ago

      Not at all if you’re a software developer, which is the whole point of the service. Automated requests from their own tools can easily blow through these limits while building a large project even once.

    • douglasg14b@lemmy.world · 21 days ago

      60 requests

      Per hour

      How is that reasonable??

      You can hit the limits by just browsing GitHub for 15 minutes.