This sounds like it would break a bunch of email address verification systems, password recovery links and the like. I wonder if indeed it does break them, but since it only affects smaller websites nobody seems to care.
I've manually tested it and seen the token consumed when clicking the link via Gmail, but had no issues when copying the link out of the password reset email sent to a Gmail account. A second manual tester confirmed the same, as have multiple support cases.
Password recovery links sporadically fail for Gmail users. I had to add extra instructions to copy and paste rather than click through the link, and I'm in the process of moving away from single-use tokens because a lot of people still click before reading those instructions and then email me for support.
My increased customer support burden isn't something Gmail PMs worry about, but they may whitelist some larger services' emails.
Instead of copy and paste you could have a POST form on your site to trigger the actual reset (with a hidden field pre-populated from the params of the email link).
Gmail and others won’t touch it. They assume a GET is free from side effects and that it is safe to load your link because of that.
Yes, please ruin no-JavaScript functionality for the sake of Gmail's nosiness.
The comment about a form and PUT/POST is a good one - by the standards it will work in any browser, even when Gmail starts executing JavaScript. Add JavaScript auto-submit on top if preferred.
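A minimal sketch of that pattern, assuming a Flask backend - the route names, token handling and the optional auto-submit are placeholders, not anyone's actual implementation:

    # GET only renders a form and has no side effects; only the POST consumes the token.
    from flask import Flask, request
    from markupsafe import escape

    app = Flask(__name__)

    FORM_TEMPLATE = """
    <form method="POST" action="/reset">
      <input type="hidden" name="token" value="{token}">
      <button type="submit">Reset my password</button>
    </form>
    <!-- Optional auto-submit for users with JavaScript, as suggested above. -->
    <script>document.forms[0].submit();</script>
    """

    @app.route("/reset/<token>", methods=["GET"])
    def show_reset_form(token):
        # A mail scanner prefetching this URL only gets a harmless page back;
        # the single-use token is merely echoed into a hidden field.
        return FORM_TEMPLATE.format(token=escape(token))

    @app.route("/reset", methods=["POST"])
    def perform_reset():
        token = request.form["token"]
        return consume_token(token)

    def consume_token(token: str) -> str:
        # Placeholder for the real single-use token handling.
        return "Token accepted; show the new-password form here."

The key property is that the GET handler is idempotent: whatever a scanner prefetches, the token survives until a real POST arrives.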
It is against Apple's policies to tell the truth. How is this not market manipulation? They are purposefully misleading the market on a grand scale.
The whole "you are free not to buy from someone else " doesn't work if you are being lied to. I believe that my rights as a consumer are being violated by this practice.
It is manipulation - Facebook was not allowed to transparently disclose the 30% Apple tax.
There's now enough antitrust cases against Apple that I'm hopeful they're going to lose in a few serious jurisdictions and be forced to change their ways.
The risk of this is already in their SEC filings, so they know it might happen. Funny they disclose the whole truth when it's legally required, but don't let others tell the truth when it's inconvenient.
At some point, some large app needs to stand up against Apple. Fortnite was a start, but that's a game; it doesn't "hit home" in Silicon Valley culture.
I still have hope that Facebook will pull something major in response to the ATT changes. To be clear: I support the goals of ATT, and I think Apple is in the right on it. But I think freedom of distribution on iOS is more important than this, and when choosing between Freedom and Privacy (or, as our forefathers would say, Freedom and Safety), I will choose Freedom, every single time.
Apple should not be forcing users to choose. There do exist irresolvable dichotomies where one has to choose, but this is not one of them.
> it doesn't "hit home" in Silicon Valley culture.
Are people in SV not aware? I was under the impression that they are acutely aware but are being paid gigantic sums of money to put aside any concerns and opinions.
Still ongoing, and Epic are getting some good documents out through discovery, which should help their case, as well as other overseas antitrust investigations.
Cut the Gordian knot. Discord should just release the iOS application themselves and bypass the Apple walled garden.
It is time to end corporate feudalism and the first step is installing applications the way they were meant to be installed instead of through one specific corporate gateway.
How do you suggest they do that? Any enterprise certificate they use to sign the .ipa would be revoked within hours, probably minutes. Telling users to sign it themselves (and re-sign it every 7 days) just isn't feasible.
The only alternative would be not to have an iOS app, but that would also be a significant blow to their business. I can't, in good conscience, blame them for complying here.
They should maintain two branches for iOS: the full-featured branch that users who have rooted their iPhone can download and install, and an Apple-safe, gimped version for the iOS App Store.
On non-jailbroken iOS devices the only way to install apps is through the App Store or through some convoluted methods which involve having a developer account (which you must pay for yearly), and I believe the % of jailbroken iOS devices is much, much lower than the % of people willing to install Android APKs outside of Google Play, anyway. Sure, they _could_ do that for the few people who have jailbroken iPhones, but doing that is pretty much like giving up on their iOS app altogether.
Those expire after seven days, just as if you signed them with Xcode. And you can only install two or three apps at once, and the wireless linking rarely works.
Apple just needs to have a version of "fastboot oem unlock" (or, for that matter, "csrutil --disable") where I can sideload applications with a big warning on the lock screen/splash screen that says the device is not secure. Besides, as long as the code is running inside the sandbox, who cares?
Look, if CPUs were better at memory latency, the BVH-traversal of raytracing would still be done on CPUs.
BVH-tree traversals are done on the GPU now for a reason. GPUs are better than CPUs at hiding latency and at exploiting much larger memory bandwidth. Yes, even on things like pointer-chasing through a BVH tree for AABB bounds checks.
GPUs have pushed latency down, and latency hiding up, to unimaginable figures. In terms of absolute latency, you're right: GPUs still have higher latency than CPUs. But in "practical" terms - once you account for the GPU's latency-hiding tricks, such as 8x occupancy (similar to hyperthreading), plus dedicated data structures and programming tricks (largely taking advantage of the millions of rays processed in parallel per frame) - it turns out that you can convert many latency-bound problems into bandwidth-bound problems.
-----------
That's the funny thing about computer science. It turns out that with enough RAM and enough parallelism, you can convert ANY latency-bound problem into a bandwidth-bound problem. You just need enough cache to hold the results in the meantime, while you process other stuff in parallel.
Raytracing is an excellent example of this form of latency hiding. Bouncing a ray off of your global data structure of objects involves traversing pointers down the BVH tree: a ton of linked-list-style current_node = current_node->next operations (depending on which current_node->child the ray hit).
From the perspective of any single ray, it looks latency-bound. But from the perspective of processing 2.073 million rays across a 1920x1080 video game scene with realtime raytracing enabled, it's bandwidth-bound.
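To make the per-ray vs. whole-frame distinction concrete, here is a toy CPU-side Python sketch (nothing GPU-specific, all names made up) of wavefront-style traversal: instead of chasing one ray's pointers to completion, all (ray, node) work items share one queue, which is the restructuring that lets real hardware keep thousands of memory fetches in flight at once.

    # Toy 1-D "BVH": nodes cover an interval, leaves are the hit targets.
    import itertools
    from collections import deque
    from dataclasses import dataclass

    @dataclass
    class Node:
        lo: float
        hi: float
        children: tuple = ()   # empty tuple => leaf
        leaf_id: int = -1

    _leaf_ids = itertools.count(1)

    def build(lo, hi, depth=0):
        """Build a small balanced tree over the interval [lo, hi)."""
        if depth == 3:
            return Node(lo, hi, (), next(_leaf_ids))
        mid = (lo + hi) / 2
        return Node(lo, hi, (build(lo, mid, depth + 1), build(mid, hi, depth + 1)))

    def trace_wavefront(root, rays):
        """One shared queue of (ray index, node) items instead of one
        depth-first pointer chase per ray. A GPU keeps thousands of these
        in flight so a stall on any single fetch is hidden by the others;
        this sketch only shows the restructuring, not the speedup."""
        hits = {}
        work = deque((i, root) for i in range(len(rays)))
        while work:
            i, node = work.popleft()
            if not (node.lo <= rays[i] < node.hi):   # stand-in for the AABB test
                continue
            if not node.children:
                hits[i] = node.leaf_id               # leaf: record the hit
            else:
                work.extend((i, child) for child in node.children)
        return hits

    bvh = build(0.0, 16.0)
    print(trace_wavefront(bvh, [0.5, 3.7, 8.2, 15.9]))  # {0: 1, 1: 2, 2: 5, 3: 8}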
I had heard about this giant JSON from friends in the GTA V modding community. OP's idea of what it is used for is right. My guess is that this JSON was quite a bit smaller when the game was released and has been growing as they add more and more items to sell in-game. Additionally, I speculate that most of the people with the knowledge to do this sort of profiling moved on to other Rockstar titles, and the "secondary team(s)" maintaining GTA Online throughout most of its lifespan either didn't notice the problem, since it's something that has gotten worse slowly over the years, or don't have enough bandwidth to focus on it and fix it.
It's also possible they are very aware of it and are saving up this improvement for the next iteration of GTA Online, running on a newer version of their game engine :)
Still much better than e.g. Ubisoft, which repainted safety borders from red to blue and removed shadows in TM2 Canyon a few years after release, also breaking a lot of user tracks. (If you're not sure why, it was before a new iteration of the game.)
Thank you! I never thought to look for that option.
I've actually reduced my GIMP usage because I find myself hovering over every tool, repeatedly, because the new icons are too cryptic and low-information. I switched back to "Legacy", though "Color" might work as well (they just don't seem as understandable at first glance, though perhaps I'm biased towards the familiar here).
I was under the impression that those obfuscation methods were exclusive to certain YouTube partners, including the RIAA members. If youtube-dl stopped supporting that method, it would still be a useful tool for the bulk of its use cases and the RIAA would no longer have any leg to stand on since it'd no longer be able to download their members' videos.
That would only move the problem. That plugin would still need a Git repository, an issue tracker, tests, and an update mechanism. Or are you trying to say you don't care if this specific part of yt-dl gets deleted from the internet by RIAA?
It decouples the thorny part of yt-dl from the mainline and reduces risk of complaints in the future.
I do care if this part gets deleted, that's why I think it should be hosted somewhere more reliable than GitHub. There are other options which aren't as polished, but may be better for hosting risky code like this, including self hosting.
In my experience many videos have it, not necessarily ones associated with the RIAA (or perhaps the overzealous[1] content detector just thinks there's some of their content in there), so it's definitely necessary to decode the algorithm for youtube-dl to work on more than just RIAA-content videos.
IMHO giving the client both the key and the algorithm to decode the content should not count as any form of protection, but the lawyers don't care...
Breaking copy protection is only illegal if you do not have a license to the work. Removing the protection breaking code isn't necessary, and everyone needs to stop pretending that it is.
This same clause of the DMCA is the suspected reason for py-kms's reinstatement after a takedown: it's perfectly legal to break the Windows license scheme if you already own a license to Windows.
Since this affects only tests, they could easily change the relevant scripts to download a list of test videos at runtime. I bet the RIAA's GitHub scrapers would not "see" it. Just statically serve a rot13-encoded list of URLs from pastebin or something, and Bob's your uncle.
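Something along these lines would do it; the pastebin URL is a placeholder, and the standard codecs rot_13 codec only touches letters, so the decoded URLs come back intact:

    import codecs
    import urllib.request

    LIST_URL = "https://pastebin.com/raw/XXXXXXXX"  # placeholder, not a real paste

    def fetch_test_urls():
        """Download the rot13-obscured test list at runtime and decode it."""
        with urllib.request.urlopen(LIST_URL) as resp:
            encoded = resp.read().decode("utf-8")
        decoded = codecs.decode(encoded, "rot_13")
        return [line.strip() for line in decoded.splitlines() if line.strip()]

    # Producing the paste in the first place is just the reverse:
    # codecs.encode("https://www.youtube.com/watch?v=<some test id>", "rot_13")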
Based on what I saw in past discussions, I'm pretty sure that the takedown was not a run-of-the-mill scraper-based takedown (it makes no sense to be taken down just for linking to videos which, at best, is what any scrapers would have seen in the original test code). It was very much an intentional, manual one with actual lawyers behind it.
There are multiple sides to "defending" a project like this. One of them is avoiding tripping the run-of-the-mill scrapers. The takedown was serious, but we don't know what drew the lawyers' attention in the first place. IMHO a simple runtime obfuscation would remove that particular attack vector, coupled with some plausible deniability (i.e. deleting all downloaded data once tested). At that point, YTDL is still on the RIAA's generic shitlist (which will require other mitigations to survive), but at least it doesn't get flagged every week by a scraper.
MySQL not having a proper type to express time spans seems like a fault to me, and "poor design". Of course you can just use an integer for it, but that is a slippery slope; in the end you'll find that you can use strings or byte arrays for everything and end up with no type system at all.
The surprise here is not that the type has limits but that they are so awkward and that there is no better strongly-typed alternative.
A missing feature is not the same thing as poor design. If you need time spans in your project it could be a fault, but every product has missing features and choosing a product that is missing features you want might be a poor decision - depending upon other tradeoffs you're making.
It seems like poor design to me, but more so on the .NET side than the MySQL side. MySQL retaining backwards compatibility is sensible, though it seems a bit awkward that they don't introduce a TIME_LONG type or similar that can hold more useful ranges. .NET just mapping a time span with a wider range down to MySQL's fairly limited range seems destined to cause problems.
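For reference, MySQL's TIME type only covers -838:59:59 to 838:59:59 (a bit under 35 days), which is why a wider TimeSpan silently becomes a problem. A rough sketch of the integer-seconds workaround mentioned above, in Python for illustration (names made up, no database driver involved):

    from datetime import timedelta

    # MySQL TIME can only hold -838:59:59 .. 838:59:59.
    MYSQL_TIME_MAX = timedelta(hours=838, minutes=59, seconds=59)

    def to_db_seconds(span: timedelta) -> int:
        """Value for a BIGINT column (sub-second precision dropped)."""
        return int(span.total_seconds())

    def from_db_seconds(value: int) -> timedelta:
        return timedelta(seconds=value)

    uptime = timedelta(days=60)          # fine as a TimeSpan / timedelta...
    assert uptime > MYSQL_TIME_MAX       # ...but does not fit in a TIME column
    stored = to_db_seconds(uptime)       # 5184000, stored as a plain integer
    assert from_db_seconds(stored) == uptime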
The way I see it, "desktop-dependent workflows" overlap more and more with those "Windows-dependent workflows". Nowadays, most people don't need more than a web browser, and in that sense they do just fine with a tablet or phone (where iOS and Android dominate, and GNU/Linux is not really a viable option, at least not yet), or a Chromebook (yes, it's Linux, but it's mainly just Chrome).
Most people I know with a legitimate use for a desktop also have legitimate reasons to use Windows or macOS: gaming on Windows, content creation on Windows or macOS (graphic design, video editing, ...), and MS Office lock-in. The only exception I can think of is developers, but even then, depending on the kind of development one is doing, using Windows or macOS may be the only choice.
I'm a software developer; I mainly do web and Android work. That, of course, I can't do on a smartphone, and my Linux distro probably does it better than Windows. Installing and configuring CLIs and other development tools is much easier on Linux. I usually just install a Linux distro as a dual boot for my peers rather than figuring out how to get that stuff configured right on Windows. There are of course things like ASP.NET that are difficult to develop on Linux because Visual Studio isn't available, but fortunately I'm able to choose my stack and avoid such cases most of the time.
"Windows-dependent workflows" doesn't mean that doing those things with Linux is impossible. An inspiring story was published here recently: https://news.ycombinator.com/item?id=21504721
They switched from Windows/PageMaker/Photoshop to Linux/Scribus/GIMP for the complete process of designing and publishing a commercial newspaper. For video editing, there are Kdenlive, OpenShot and Blender.
GNU/Linux is definitely more than just a "web browser" OS and is capable of doing most of the things that an average user does on Windows.
I, too, develop for Android and, like you say, we have the "luxury" of being able to do it on both Windows and Linux (and macOS as well, if I wanted). Even .NET development is more cross-platform than ever, with .NET Core.
One of my points was precisely that developers were an exception in this regard, as our tooling is generally cross-platform. You can't say the same about people who do their work primarily using Adobe tools, for example.
And even developers sometimes don't have this luxury: for example, if you do iOS development, at some point you must use a Mac to sign the app in order to publish on the App Store. Of course you can use stuff like Xamarin and use the Mac exclusively to sign, but this is often inconvenient compared to just using the officially endorsed stack.
Overall, requiring a "traditional desktop operating system" to work is less and less the case for the general population.
I agree with you on the clipboard thing, but as with every other API that gives access to sensitive information, they could have put it behind a permission prompt. It wouldn't make the situation worse than it is now, and wouldn't annoy users and developers nearly as much.
What you are suggesting seems so obvious that I'm assuming they couldn't do it. Android permissions on older targets essentially "grandfather" in apps, since apps targeting older versions didn't need code to ask for each permission at runtime (they asked for the entire set of permissions at install time). I imagine there was no way Google could effectively permit apps targeting older versions to run safely inside the new OS. So they just took the hard path for developers and are forcing those that really need the permission to update their apps. I'd be unhappy too, but I wouldn't be entirely surprised, and I'd only be furious if I was doing something bad and couldn't anymore. There is going to be a new permission, READ_CLIPBOARD_IN_BACKGROUND, which will give me whatever I need and let users know I'm using it, so if I'm willing to write the code, I have everything I need.