The key here is "tested daily." The last time I wanted a sample-data API, it took me way too long to find one, because all the ones recommended to me were deprecated. It's really tough to keep a free API up, because there's no incentive. You can't sell ads on it.
Maybe you could! Perhaps a sample data API that creates users like "John Chocolate Oreos," address "100 Pack Street", age 19.99, email "visit<at>oreo.com"
I don’t think Oreo would want to commit ad spend to that—it would perform worse than the display network, and worse than the Outbrain/Taboolas of the world.
(This doesn't apply to every API in the list, but) having made the mistake of using a public API (that later went offline) for examples in a book of mine, never again.
These days I keep an API deployed on a subdomain I control.
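For anyone curious how little that takes: a self-hosted sample-data endpoint can be a few dozen lines of stdlib Python. This is a minimal sketch, and all the records and names in it are made up:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# A few canned records; deterministic output keeps book/tutorial examples stable.
USERS = [
    {"id": 1, "name": "Jane Example", "email": "jane@example.com"},
    {"id": 2, "name": "John Sample", "email": "john@example.com"},
]

class SampleDataHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/users":
            body = json.dumps(USERS).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

# Run with: HTTPServer(("", 8000), SampleDataHandler).serve_forever()
```

Point it at a subdomain behind any reverse proxy and the examples in your book can never 404 out from under you.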
Not sure which specific example they're referring to, but another example is that, according to tradition, the prophet Isaiah was martyred by being placed inside a hollow tree that was then sawn in two. This appears to be referenced by the Apostle Paul in Hebrews 11:37, even though there is no description of Isaiah's death in the Old Testament:
"They were stoned, they were sawn asunder, were tempted, were slain with the sword: they wandered about in sheepskins and goatskins; being destitute, afflicted, tormented;"
The possibility of an internet resource disappearing exists regardless of the resource type.
Running an API Forwarder that could act as a common target for all your APIs might work. Give it the request, and it passes it on, records success, optionally sends a notification if an endpoint that previously succeeded is now failing.
Add a fancy feature of redirection with format changing to handle replacing dead APIs with new ones transparently.
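A rough sketch of that forwarder's bookkeeping, assuming nothing about the real design; the URLs are placeholders and the alert is just a print where a notification hook would go:

```python
import urllib.request
import urllib.error

# Last-known health per endpoint; a real forwarder would persist this.
_last_ok: dict = {}

def record(url: str, ok: bool) -> bool:
    """Update the history; return True if a previously healthy endpoint just failed."""
    regressed = bool(_last_ok.get(url)) and not ok
    _last_ok[url] = ok
    return regressed

def forward(url: str, timeout: float = 10.0):
    """Pass the request on, record success, and flag regressions."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            body, ok = resp.read(), 200 <= resp.status < 300
    except (urllib.error.URLError, OSError):
        body, ok = None, False
    if record(url, ok):
        print(f"ALERT: {url} was healthy but is now failing")  # notification hook
    return body if ok else None
```

The regression check only fires on the healthy-to-failing transition, so a permanently dead endpoint doesn't spam you on every request.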
If anybody makes one of these, they should totally make it a free public API. I'd use it, and I'm not certain whether that would just be ironic.
A few years ago I did a similar deep dive to find all the available music-related APIs. I've since moved on to other projects, but maybe someone will find it useful / maybe OP can add the entries in my list to FreePublicAPIs.com!
{"ad":"Looking for someone to do yard work. Must have a hoolahoop. 760-555-7562","iseven":true}
{"ad":"FOR SALE - collection of old people call 253-555-7212","iseven":true}
{"ad":"Auto Repair Service: Free pick-up and delivery. Try us once, you'll never go anywhere again. Email dave57@qotmail.com","iseven":true}
$ curl https://api.isevenapi.xyz/api/iseven/9999999
{"error":"Number out of range. Upgrade to isEven API Premium or Enterprise."}
$ curl https://api.isevenapi.xyz/api/iseven/2e21
{"error":"what is this I don't even"}
Whoops, fixed and fixed! And regression tests added.
I had made a silly, silly mistake: when I went to restrict numbers longer than 7 digits from the "free plan", I had at the same time disallowed digits larger than 7 as being "valid digits", so anything with an 8 or a 9 had been "invalid".
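The actual code isn't shown anywhere, but a validation-regex slip of this shape would reproduce exactly that behaviour; both functions here are hypothetical reconstructions:

```python
import re

# Buggy check: the same edit that capped the LENGTH at 7 also capped the
# digit RANGE at 7, so any number containing an 8 or a 9 came back "invalid".
def is_valid_buggy(n: str) -> bool:
    return re.fullmatch(r"[0-7]{1,7}", n) is not None

# Intended check: up to 7 of any decimal digit.
def is_valid_fixed(n: str) -> bool:
    return re.fullmatch(r"[0-9]{1,7}", n) is not None
```

It's an easy slip to make because `{1,7}` and `[0-7]` look so similar, and every short test number without an 8 or 9 still passes.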
These come in handy in an instructional context—being able to have a simple predictable API that you can point students at so they can learn how to call an API and process its data.
To paraphrase a well-known saying: "there's no API, it's just someone else's computer".
I wish programmers would internalize this. They often seem to take APIs uncritically, not questioning whose resources they're using, where the data comes from, or its quality beyond obvious cases. APIs are leaky abstractions, as all abstractions are.
I like this: not only does it let people find the APIs, but having a collection like this encourages others to make APIs and to keep them running.
There's not a lot of incentive to create a service if you feel like no one will ever know about it. I have had a few ideas for services that I might have developed if I thought anyone would ever see them.
There's probably an argument for sponsorship here as well, not as a vehicle for advertising, but just companies paying for (or supplying resources) to cover the ongoing costs of the service as a public good. I wish I could make something that worked and could put it somewhere and commit zero mindspace to billing, server maintenance etc. and just have it keep running, forever.
Honestly, Web3 is partially that already. If you remove the hype and buzzwords, you are left with a bunch of individually addressable contracts that can all be composed and integrated together seamlessly under a single web frontend.
I remember years ago doing something as simple as getting a coin onto an alt-chain was a 20 step process. Get the gas coin on the target chain, find the right contract address, bridge the assets over, do a bunch of conversions, finally to get the token on the chain I wanted.
These days it's a single click. Different tools and companies have sprung up, and while they use the exact same contracts I would have manually interacted with before, they script them all together into one transaction to save time and gas fees. A simple UI to select the destination chain and coin, and it handles all the swaps, gas, bridging, etc. needed.
You don't need permission, and a lot of standards have developed to keep things running smoothly.
It's not just coins and crypto either. No reason not to develop other things off the chain too.
I've been working on my own PIM (personal information management) suite and using web3 has been amazing. No need to worry about a server or a database. Just deploy my code to the chain and store all my data encrypted on chain as well. The altchains are extremely cheap and this storage is already backed up, replicated, distributed, and can be local with a self-ran node. I will never lose it and I can bootstrap my data from scratch anywhere in the world as long as I can sync the chain or reach an API. I have todo lists, notes, contact management, and more in the pipeline and I've never felt so safe about my infrastructure or data before.
AWS is arguably an API, and I shovel a ton of money their way. Stripe is another one, though that ends with me getting paid, so I guess that technically doesn't count unless I start losing money using it. There's also the APIs backing all the apps I pay for, since I don't talk to them via a webpage. But then, if we look at every web page that hits an API to get a list of items to display, instead of that list coming over in the HTML itself, that'd be an API, technically. ChatGPT is an API I hit a bunch via non-web-browser clients that I pay for.
I don't really understand why one would build an HTTP API for this. I can't imagine the database being more than a few megabytes, the whole thing could probably be served as a JSON file.
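To illustrate the point: a directory this small could be one static file that clients filter themselves, with no backend at all. A sketch, where the file contents and entries are made up:

```python
import json

# Stand-in for a static file served off a CDN, e.g. /apis.json; no server code needed.
APIS_JSON = """
[
  {"name": "isEven", "category": "fun", "reliability": 98},
  {"name": "Guild Wars 2", "category": "games", "reliability": 100}
]
"""

def search(apis, category):
    """The client-side 'endpoint': filter the list locally after one fetch."""
    return [a["name"] for a in apis if a["category"] == category]

apis = json.loads(APIS_JSON)
print(search(apis, "games"))  # prints ['Guild Wars 2']
```

One fetch, then everything (search, filtering, sorting) happens in the browser.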
Small nitpick: how is the health score calculated? I see a vague description on the about page, but I'm confused by the weights. For example, the Guild Wars 2 API shows 95 percent, but clicking through for details shows 100 percent reliability. Perhaps it's the high latency that drops its score by 5 percent?
This is really great overall. I like the interface on mobile, and search works great for me so far. Bookmarking for later.
Odd layout. After entering a search in the text field, the next element is a title: “Search for an API” and a subheading: “Searching for APIs…”. The results were below the fold on my phone. This title made me think the query was not yet submitted, so I’m sure I submitted the form a bunch of times.
Not so much for polling data, but some governments do allow access to coarse election data.
Australia's was detailed enough that the unfortunately complicated math of their election system could be repeated and verified (can't find the post about it, though).
Good point. However, in my experience messing around with choropleth maps, I often find myself resorting to web scraping in the end when I want something more current (opinion polling). Despite this data being seemingly everywhere, and companies often wanting you to use it (with attribution) so they can get advertising for other services, they never seem to have an API.
Polling is a fool's errand, in my experience, since I'd guess a ton of the "how about now?" responses are "nope"
IMHO sitemap.xml and/or ActivityPub are sleeper approaches to helping sites with the adversarial relationship they have with scrapers. In my experience, there are two schools of thought: play the very expensive game of cat and mouse (expensive for BOTH sides) of trying to verify the human-ness of visitors, or make it so the inevitable bots don't eat up expensive and valuable resources that could otherwise be spent serving actual humans.
Imagine how much less nonsense would have to go on if Amazon had an ActivityPub feed that CamelCamelCamel could subscribe to, or similar for Craigslist.
I'm not saying "all information wants to be free," so if Amazon wants to focus its energies on hoarding the review content, since that's arguably its moat, fantastic - let the bot and anti-bot people war over that content, instead of arguably factual data found on the product listing pages
Maybe there is a language issue here. In Australia, a full on election is also called a poll. The government electoral commission does the polling, the voters do the voting.
I think you're getting confused with opinion polling.
Ah, that's quite possible - since it was a thread about APIs, I interpreted the question as "a way to programmatically await data arrival" - the verb "polling" in API context is repeatedly invoking a status endpoint to ask if new information is available either in absolute terms or in terms of since the last request
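In code, that sense of "polling" is just a retry loop against a status check. A minimal sketch, where the check function is a stand-in for a GET against whatever status endpoint you're watching:

```python
import time

def poll(check_status, interval=2.0, max_attempts=10):
    """Repeatedly invoke a status check until it reports new data.

    `check_status` stands in for a request to a status endpoint; it should
    return the new payload, or None if nothing has arrived yet.
    """
    for _ in range(max_attempts):
        payload = check_status()
        if payload is not None:
            return payload
        time.sleep(interval)
    return None  # gave up: nothing arrived within max_attempts checks
```

Most of the "how about now?" responses are indeed "nope", which is exactly why webhooks or ActivityPub-style push feeds come up as the alternative.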
I stand by my rant :-D but obviously stand corrected on the question being asked
I tried to build something years ago on a similar foundation, and I found maintaining the API list to be impossible as a solo dev. In a pool of say 30 APIs, at least one would break their "contract" as a public resource daily. Shifting endpoints, revoked public tokens, changing outputs.
I was quite disappointed because I loved the product, a dashboard for arbitrary live data sourced from APIs, but the cost of maintaining it was too high.
I would say that would be my ideal use case for LLMs. Breakages like these should be easy to fix with automation that can reason about documentation and spit out code fixes.
Of course I assume there is documentation that is updated before new changes go live which might be too much to ask :)
It's an interesting engineering problem. I wouldn't imagine LLMs as they currently are could work directly on the whole codebase without breaking it just as often as the APIs do. But perhaps you could have one maintain connectors/interfaces for each individual API, such that it can get one very wrong without ruining the whole program.
You could even have its success depend on a test suite, so that it iterates until the tests pass.
For an "API list" that only needs its tests kept green, something like a shifted endpoint should be fixable with tests + an LLM.
So that's the idea I proposed: a single dev with automation and an LLM should be able to maintain the "API list", but maintaining arbitrary code that depends on those APIs is, I expect, beyond current LLMs.
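A sketch of that per-connector repair loop; `suggest_fix` stands in for the LLM call and `pytest` for whatever test runner you use, so treat this as the shape of the idea, not an implementation:

```python
import subprocess

def repair_connector(path: str, suggest_fix, max_rounds: int = 5) -> bool:
    """Keep asking the model for a patched connector until its tests pass.

    `suggest_fix(source, test_output)` is a hypothetical wrapper around an
    LLM call that returns a rewritten connector source file.
    """
    for _ in range(max_rounds):
        result = subprocess.run(["pytest", path], capture_output=True, text=True)
        if result.returncode == 0:
            return True  # connector is healthy again
        with open(path) as f:
            source = f.read()
        with open(path, "w") as f:
            f.write(suggest_fix(source, result.stdout))  # feed failures back to the model
    return False  # give up and page a human
```

Because each connector is repaired in isolation against its own tests, a badly hallucinated fix can only break that one API's integration, never the whole program.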