Thanks for FastAPI. It's a super exciting project.
As a very long time Django developer I see the potential for it to finally replace Django in jamstack-type apps. But two things are direly missing for that to be the default IMO:
1. A flexible admin. Django's admin allows for super fast prototyping. I wrote an accounting/bookkeeping app recently which I used for myself, and I didn't write a UI for it at all for several months because I could just get by with the admin. (Filed two quarterlies with just the admin as UI!)
2. An ORM that is as natural as Django's. I love SQLAlchemy but its syntax is a huge turnoff. I imagine the "right solution" combines the best of both worlds; maybe some Django-like syntax wrapper around SQLAlchemy. … with typings.
What do you think, is there progress in those areas?
> A mix between Pydantic and SQLAlchemy, to use the same Pydantic model for both (I have some experiment in progress).
I'm not sure it's a good idea. If the app is more complex than a TODO list you end up with a few ways to serialise the same DB model. You need one schema to update User in the DB, another to serialise it onto the queue without the password, etc.
I think the idea is that FastAPI does one thing and does it well. You can mix in other parts of what you need, in the same way FastAPI itself leans on great projects.
SQLAlchemy + Alembic are standard because they're robust and well-known, but there are alternatives (particularly with async)[0][1][2]
For the admin you can put a front end on it with something like react-admin, which has pluggable data providers, or api-admin[4]
Also a big django fan taking the plunge (we've started using FastAPI in production at work, and really love it!)
An admin interface would be wonderful, but given such a heavy API focus, I find Insomnia or Postman to work well enough for those interactions, so I haven't missed it all that much.
Seconding the SQLAlchemy friction though. Built-in ORM support would really help cut down the boilerplate required to wire FastAPI/Starlette models into the database. I think django devs feel the issue more acutely since defining FastAPI objects feels so similar to defining django models. But in the end, rather than having to wire up DRF serializers and viewsets, you need to wire up the database.
One other benefit to taking a more opinionated approach to database interactions: with a sensible-defaults-but-customizable user model, FastAPI could offer token based authentication/authorization out of the box by implementing something similar to Django+DRF+Djoser endpoints
Anyway, seconding the gratitude for this wonderful project, and really looking forward to using it more!
> I love SQLAlchemy but its syntax is a huge turnoff.
Out of curiosity, which part of the syntax? (Hopefully not all of it.) Personally I hate the declarative part (defining the model), but feel the query syntax is good, especially with some SQL knowledge (Django’s isn’t bad either but is too divorced from the SQL syntax for me).
Thanks a lot for your work! We recently switched to FastAPI from Django rest framework at Etebase[1] and it's been an absolute pleasure!
P.S., if anyone is interested, it took a bit of hacking, but we managed to get two unusual things working:
1. Use MsgPack instead of JSON. It works great and wasn't too much of an effort (the existing libraries for this get it wrong and don't work well); see the sketch after this comment.
2. Use the Django ORM with FastAPI. This was quite hard to do and quite manual as the Django ORM assumes the rest of the Django stack is in use.
Etebase is open source[2], so you can just check out how it's done if you are interested. I'm also happy to help if anyone has questions.
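In case it's useful, the core of the MsgPack part is just a custom response class plus decoding the raw body yourself. This is a simplified sketch of the general idea, not the actual Etebase code:

```python
from typing import Any

import msgpack
from fastapi import FastAPI, Request
from fastapi.responses import Response

app = FastAPI()


class MsgpackResponse(Response):
    """Render response content as MessagePack instead of JSON."""
    media_type = "application/msgpack"

    def render(self, content: Any) -> bytes:
        return msgpack.packb(content, use_bin_type=True)


@app.post("/items/", response_class=MsgpackResponse)
async def create_item(request: Request):
    # Decode the raw MsgPack request body ourselves; binary blobs
    # pass through without any base64 overhead.
    data = msgpack.unpackb(await request.body(), raw=False)
    return MsgpackResponse({"received": data})
```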
I really hope we get a good async ORM soon. I'm currently using SQLAlchemy 1.4b1 in a side project, though the typing (and mypy plugin) are unfortunately not there.
Yeah, I took a look at it but decided against betting on another new tech, especially in such a sensitive area where there are so many things that can go wrong!
I have had the same challenge with other python ORMs. I would love it if the django ORM were separated from the django project itself. That would be a hugely positive thing for the industry.
Same experience. Just had a new project a month ago and we decided to stick to the tried and tested sqlalchemy + alembic after spending a few days looking at the new crop.
We didn't do it because of performance, we did it because of size. We deal almost entirely with encrypted data which means binary blobs. Msgpack lets us send binary data directly without the base64 overhead.
Thank you for FastAPI! Really well put together and I like how you’ve leveraged other great Python libraries.
Also had a good laugh when you posted about unrealistic job requirements a while back [1]; you know you’re doing something right when recruiters start asking for experience with your project.
Thank you, really appreciate your work! I think FastAPI is the first serious contender to Flask, which is saying a lot.
There is one thing though. It sounds small, but... Docs favicon is loaded from your webserver by default. I know telemetry helps, but not disclosing it and making it difficult to remove (believe me, it took me some time to do it) leaves a bad taste. Unnecessary, I would say, as the framework is impressive otherwise... I hope you reconsider this decision.
Hehe, there's no telemetry at all. It just points to the same docs' favicon (that is not even measured). Just to make a simple and easy to recognize UX. The same way JS and CSS for Swagger and ReDoc come from a JS CDN instead of forcing developers to figure out a way to serve those static assets.
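If you'd rather serve everything yourself (including the favicon), you can disable the default docs and declare your own route pointing at self-hosted assets. A minimal sketch, assuming a local ./static directory with the Swagger UI files in it:

```python
from fastapi import FastAPI
from fastapi.openapi.docs import get_swagger_ui_html
from fastapi.staticfiles import StaticFiles

# Disable the built-in /docs so we can serve our own copy of the assets.
app = FastAPI(docs_url=None)

# Assumes you've downloaded swagger-ui-bundle.js, swagger-ui.css and a
# favicon into a local ./static directory.
app.mount("/static", StaticFiles(directory="static"), name="static")


@app.get("/docs", include_in_schema=False)
async def custom_swagger_ui():
    return get_swagger_ui_html(
        openapi_url=app.openapi_url,
        title=f"{app.title} - Swagger UI",
        swagger_js_url="/static/swagger-ui-bundle.js",
        swagger_css_url="/static/swagger-ui.css",
        swagger_favicon_url="/static/favicon.png",
    )
```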
Not sure we are talking about the same thing? The favicon is loaded from the tiangolo.com server, not from the location of the docs. If there is no telemetry, then it's all the more important this gets changed...
But anyway, last thing I want to do is argue - as I said, I really appreciate your work! Really big thank you! :)
There is NO reason? Not even that that's where FastAPI is hosted and that it's the developer’s website? You can’t think of any other examples of something linking to the product homepage?
Until this post, I
(1) was amazingly not aware of this
(2) have started a side project in Django/DRF
(3) am incredibly attracted by FastAPI's high-quality documentation
As it relates to the final point I would say the documentation has won me over so convincingly in just an evening's reading that I am planning to immediately rewrite the django rest framework codebase in FastAPI.
See my sibling comment[1]: I highly recommend switching to FastAPI, but you need to know that if you plan on still using the Django ORM you will need to do quite a bit of manual stitching in order to get it to work.
You mention "High perf" as the first claim. Are there any performance benchmarks in comparison to other frameworks? I only ask because I see FastAPI as #228 on techempower. https://www.techempower.com/benchmarks/
If you actually wanted a fair(er) comparison, you can filter your benchmarks by Python. People choose FastAPI if they want a Python web framework, there is little use in comparing it to Rust and C++ frameworks or whatever...
That said, this list looks pretty bogus anyway, many of the tools listed here are not really frameworks to begin with. uvicorn for example is an ASGI server (you can run a FastAPI application ON uvicorn) so I'm not sure how this benchmark is "comparing" uvicorn and FastAPI.
This list falls away pretty quickly when building a real app.
Common actions like calling another API or running a CPU- or I/O-intensive task for the user can change the performance profile of your app.
Good frameworks have tools for you when such needs arise and keep your app scalable.
One could write an app using the #1 on the TechEmpower list and end up with an unmaintainable mess. Maybe there should be a 'real app' benchmark that various languages/stacks implement to compare 'real world' scenarios.
I have been using fastapi for a project in the last months and it has been a delightful experience. I really love how the framework is designed, the great documentation and the built-in support for async/await.
I use it for my projects (and one project even makes enough revenue that I sponsor Sebastián’s work through GitHub sponsors at $250/mo) and it for sure has increased my development velocity.
It’s an absolute joy to work with. The automated interactive docs are great for iteration. Type hints as first class citizens is an awesome developer experience that I didn’t think I wanted.
Using the auto generated OpenAPI you can even generate client side code and typings to interact with your API.
It feels like flask but with lots of quality of life improvements.
I use the openapi-generator as well, but I write the spec first, then generate both client- and server-side models. Not the client itself so much; I don't much like the client-side clients it generates, and the server-side one is barebones at best. (Server is in Go, client is in Typescript)
FastAPI is a nice library, but it uses code-based API specifications. It generates the spec from code. So you need to write your API code by hand first, and do serialization manually...
This is an important distinction and might not be what you want.
Using spec first API design makes more sense IMHO.
You first spec your API (Either in OpenAPI or GRPC or whatever Interface Definition Language you use). This allows Backend and Frontend developers to change the API definition.
From there you generate all type-safe model objects, client and server stubs, and fill in the implementation with full type hinting.
At my current company, we wrote OpenAPI specs first (no need to know Python) and use https://openapi-generator.tech/ to generate the Python server, Python client and TypeScript client. We're looking into generating Kotlin and Swift clients too...
I'm a big proponent of spec first API design. It's much cheaper to agree on a spec first before you start coding.
But it depends on what you are doing.
Personally I like GRPC best, since the code generators are top notch.
First, stop writing interface code for your APIs by hand.
Second, stop writing interface code for your APIs by hand.
Third, stop writing interface code for your APIs by hand.
We're living in 2021... Every time I see somebody manually mapping an object to a JSON dictionary I'm crying on the inside. It's like writing machine code by hand.
I agree with this in principle but in practice specifications seem quite hard to write and maintain because you have to learn an obscure syntax for a new language that nobody on the team knows. So as a devil's advocate question: what is the advantage of writing your spec in a spec language vs having it written in a (sufficiently well typed) language and generating the spec from that?
Especially when you have one dominant language that most of your code is interfacing with, having the interface be hand-written in idiomatic code for that language is a lot nicer than everyone having to work with auto-generated code that is often not as nice.
Just to further this point. In my group we use several languages (python, java, node) for various services. It is extremely useful to define our spec via openapi and then generate servers/clients that conform to this. We use swaggerhub and their code gen and it's decent for multiple languages.
I agree with API-spec first approach, yet I still love FastAPI. I try to get first iteration written in Markdown so we can change it easily. Once it is decided upon, FastAPI lets you build the actual implementation easily. It is priceless that the docs match the implementation by default.
Possibly, but I think that Python code generation is kind of pointless. Code generation makes sense with compiled languages, but in Python everything can be constructed on the fly at runtime, without any loss of performance.
But there is code, and it can be type hinted and checked just as easily. First of all, any constructed classes usually inherit from some parent class, which can be checked for easily.
Second, types can be constructed on the fly as well (see the sketch at the end of this comment). Agreed, they will not show up in an IDE, but a) they can (and should!) be unit tested, and b) while I do value editor type-checking I value my time, and a streamlined process (i.e. not having the extra step of code generation each time I change the spec), much more.
And finally, if you version both the spec and the generated code you run the risk of them diverging without realising. If you absolutely must have code generation, make sure not to commit the generated code, and rebuild each time you change the spec.
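To make the "constructed on the fly" point concrete, here's a toy sketch: Pydantic's create_model builds real model classes at runtime, so field definitions parsed from a spec can become validating types without a separate code-generation step (the spec_fields dict here is made up for illustration):

```python
from typing import Optional

from pydantic import BaseModel, create_model

# Imagine this dict was derived from your OpenAPI/JSON Schema spec at startup.
spec_fields = {
    "id": (int, ...),                  # required int
    "email": (str, ...),               # required str
    "nickname": (Optional[str], None), # optional, defaults to None
}

# A real Pydantic model class, built at runtime -- nothing generated to commit.
User = create_model("User", __base__=BaseModel, **spec_fields)

user = User(id=1, email="a@example.com")
print(user.dict())  # {'id': 1, 'email': 'a@example.com', 'nickname': None}
```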
Though we do not generate the entire server code, we do follow a spec first design paradigm for APIs as well as DB. We have adopted hexagonal architecture for our web framework and therefore, code generation is limited to fit in it. We use Starlette and SQLAlchemy.
I recently used Starlette, which FastAPI also uses, for a project and absolutely loved it. I went with Starlette over FastAPI because I didn't need a lot of what FastAPI provided for that particular project.
I want to start nudging my coworkers into using FastAPI over Flask because some of the async APIs they've started using. FastAPI seems to come with all the things they want out of the box instead of adding a bunch of individual Flask packages.
Just for my own understanding, where would FastAPI be appropriate vs. Starlette and vice versa?
The big additions of FastAPI are its pydantic integration, which allows for input validation, and its openapi.json generation, which gives you interactive API docs for your web app out of the box.
Additionally, it provides docs and infrastructure for some extras you might want, like authentication and testing.
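A minimal example of both (sketch): the request body is validated against the Pydantic model, and the same declaration feeds openapi.json and the interactive /docs page:

```python
from typing import List

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()


class Item(BaseModel):
    name: str
    price: float
    tags: List[str] = []


@app.post("/items/")
async def create_item(item: Item):
    # `item` is already parsed and validated; invalid payloads get a 422
    # response with details, and the schema shows up in /docs automatically.
    return {"name": item.name, "price_with_tax": item.price * 1.2}
```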
I too recently started using Starlette for https://sqwok.im and had investigated FastAPI & Responder.
Both look promising but I've found that Starlette on its own is very lightweight, well-designed, and straightforward to work with (coming from many years of flask/django).
Some nice features of FastAPI are the automatic response type handling, e.g. "return {'json': 'yes'}" vs "return JSONResponse({})", and the flask-like route decorators. I ended up building the former manually into my project.
Both Starlette and FastAPI are worth checking out, especially if you've spent time building python web apis with django or flask in the past.
I may have done something wrong, but I remember that when I would do 'return {"json": "yes"}', it would not set the content-type to json. When I discovered this, I had to go back through a bunch of code and convert it to either fastapi.Response or JSONResponse.
Creating the Response instance proved to be a better approach, because it makes it much easier to set content type, status codes, cookies, headers, etc.
Again - I might have done something wrong, but if you have an instance just doing 'return {"key": "value"}', check it with curl or browser dev tools to see what the Content-Type is.
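For reference, the explicit-Response version is only a couple of lines more and gives you full control (a sketch; the header and cookie values are arbitrary):

```python
from starlette.responses import JSONResponse  # fastapi.responses re-exports this


def make_created_response(payload: dict) -> JSONResponse:
    response = JSONResponse(
        content=payload,
        status_code=201,
        headers={"X-Example": "yes"},
    )
    # Content type is application/json by default; cookies, headers and
    # status codes are all explicit and easy to tweak from here.
    response.set_cookie("session", "abc123", httponly=True)
    return response
```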
I've been using FastAPI for some time, and now I'm using it as a full web framework (not just for REST APIs). I like writing SQL without ORMs, so the combination of aiosql[0] + FastAPI + Jinja2 works great. Add HTMX[1] and even interactive websites become easy.
That's in fact the stack I am using to build https://drwn.io/ and I couldn't enjoy it more.
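If it helps anyone picture it, the wiring is roughly this (a sketch; the template names and the hx-get snippet are illustrative, and in the real stack the data would come from aiosql queries):

```python
from fastapi import FastAPI, Request
from fastapi.responses import HTMLResponse
from fastapi.templating import Jinja2Templates

app = FastAPI()
templates = Jinja2Templates(directory="templates")


@app.get("/", response_class=HTMLResponse)
async def index(request: Request):
    # Full page; in a real app the rows would come from aiosql queries.
    return templates.TemplateResponse(
        "index.html", {"request": request, "items": ["foo", "bar"]}
    )


@app.get("/items", response_class=HTMLResponse)
async def items_fragment(request: Request):
    # HTMX swaps this HTML fragment into the page, e.g. via
    # <div hx-get="/items" hx-trigger="every 2s"></div> in index.html.
    return templates.TemplateResponse(
        "_items.html", {"request": request, "items": ["foo", "bar", "baz"]}
    )
```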
Great, that’s exactly the approach I was looking for to build a web-based app without JS, only Python and HTML. Can you recommend a resource for learning HTMX?
I'm not really a backend developer but needed to put together a pretty simple backend recently for a client project (mobile app). FastAPI is really productive in all the areas you'd normally reach for Flask. I used SQLAlchemy as an ORM with Postgres and PostGIS. I had to write a bit of custom serialization stuff with Shapely to fit PostGIS in, but otherwise everything was pretty much taken care of with Pydantic. It's a good level of magic/abstraction in that you can rip pieces out easily and replace them if you need to, but if you just follow the beaten path everything works. The OpenAPI spec and docs are also really nice to have built-in.
We're using FastAPI in production at InvestSuite. Highly productive framework, very well written documentation. You get jump started right away.
The only caveat is async Python. It might be tricky when using 'sync' libraries. It's not always straightforward and you'll find yourself wondering why your server is blocked from time to time. It's not a problem of FastAPI, but you need to be aware that if you do a blocking call (a function not prefixed with 'async') to a DB, it blocks the event loop.
I'm glad FastAPI is useful! And thanks to InvestSuite for being one of the FastAPI gold sponsors.
If you are having problems, you can ask in GitHub issues.
But for the async stuff, a simple rule of thumb is to always use normal def functions and blocking (non async) libraries, that way FastAPI will do the right thing and make sure to run it in a threadpool (thanks to Starlette, the underlying library).
And for the specific path operations (endpoints) where you need to optimize performance, then you can use async and carefully choose async libraries, or run the blocking code with run_in_threadpool, but you can leave those details and possible extra complexity for the cases that actually need the extra performance or async support.
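In code, the rule of thumb looks roughly like this (sketch):

```python
import time

from fastapi import FastAPI
from fastapi.concurrency import run_in_threadpool

app = FastAPI()


@app.get("/simple")
def simple():
    # Plain `def`: FastAPI runs this in a threadpool for you, so the
    # blocking call doesn't stall the event loop.
    time.sleep(1)
    return {"ok": True}


@app.get("/optimized")
async def optimized():
    # `async def`: you are on the event loop, so blocking calls must be
    # pushed to the threadpool explicitly (or use a truly async library).
    await run_in_threadpool(time.sleep, 1)
    return {"ok": True}
```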
We use fastapi in production too, but the problem we faced with sync stuff in combination with sqlalchemy was that the sessions (which we inject using Depends) were created before all the actual sync functions were executed, so the connection pool ran dry and everything became unresponsive. With flask I had a better experience because it creates the session in the same thread as the function that will handle the request. If you overload it a bit (say, 100 concurrent requests with a connection pool of 30) all the Depends calls will block because there are no threads left in the pool to actually handle the requests.
I understand that fastapi is more suited for async stuff, for which it truly works great, but it would be nice if there were an idiomatic solution within fastapi and/or starlette that prevents these kinds of problems.
Yes, and it's made worse by the fact that there's no way to get the raw body of the request in a synchronous endpoint (a possible workaround is sketched below).
I also ran into some really bad validation/serialization performance degradations for large response bodies. Serializing responses with a few hundred small objects or neural network embeddings would blow a function that takes 7 ms up to 100-200 ms.
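For the raw-body issue, one workaround that should work (an untested sketch): keep the endpoint async just to read the body, then hand the bytes to the blocking code via the threadpool:

```python
from fastapi import FastAPI, Request
from fastapi.concurrency import run_in_threadpool

app = FastAPI()


def process_blob(raw: bytes) -> dict:
    # Existing synchronous code that needs the raw request body.
    return {"size": len(raw)}


@app.post("/upload")
async def upload(request: Request):
    # Async only so we can await the raw body; the blocking work still
    # runs in a worker thread instead of on the event loop.
    raw = await request.body()
    return await run_in_threadpool(process_blob, raw)
```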
My understanding was that if you write a regular function (`def` rather than `async def`) then FastAPI (or really Starlette which it uses under the hood) executes the function in a thread pool so that no blocking of the main event loop should occur.
Yes. But this is a very basic example. When you have an async function defined with `await` statements in it, and later on in the function you call a `blocking` function, you need to be aware that you have to run it in the threadpool.
You don't always know that a function call is blocking, because you don't always know what is happening behind the scenes of that function and on what it depends.
What is the benefit of the threadpool though? Am I understanding it correctly that due to the GIL, Python will just keep switching between the threads, so instead of running A then B each at 100% speed, both will run concurrently at 50% speed (+/- overhead)?
Only if you're CPU bound, but usually your webserver is blocking on disk IO or database calls or whatever, not calculating stuff, in which case the GIL doesn't matter.
I really like FastAPI. The "fast" in the name means fast development. Using Python types to create endpoints and get auto-generated docs is a joy.
If you are looking for examples, feel free to check out mine with websockets [0] or how I run FastAPI with systemd and nginx [1] in production (I run several side projects on FastAPI).
I migrated a side project from Flask to FastAPI recently. It's deployed to GCP Cloud Run with a Cloud SQL database, and integrated with Auth0 and Stripe. I use tiangolo's uvicorn-gunicorn-fastapi base image.
It's been really nice for me. As easy as flask, plenty of docs for real world applications, and the performance is great. The emphasis on type systems in the docs was ideal for me.
I was deciding between this and DRF after wanting something more opinionated than just Flask, and I'm really happy with my decision so far.
Similarly nothing but praise and admiration for FastAPI, having also migrated over a couple of Flask projects. A couple of additional plus points to above:
1. Integration with Jinja2, meaning I was able to use all my view templates virtually untouched.
2. uvicorn supported out the box, meaning there's a production-grade web server ready to go. It'll need some work for very high volumes I'd imagine - but for low-medium load (c.10s-100s req/sec) it's working just fine for me.
Flask is clear that the built-in server is not for prod use. It's not a show-stopper in the great scheme of things. But it's just one less thing to worry about with FastAPI. Building a docker image is trivially easy, so lots of hosting options.
Regarding the uvicorn-gunicorn-fastapi image - is it really a good idea? You'll likely gain some other dependencies as the project grows, and having some dependencies in an external Docker image (tiangolo's one pip installs fastapi + stuff) and some installed by your own means sounds like a lot of headaches quite soon... That's why I personally went with just the python image, installing everything from pipenv.
Have there been any benchmarks done on the websocket side of FastAPI specifically compared to flask-socketio? Especially when scaling horizontally and needing to synchronize across many socket servers?
We're building out a product that will maintain large numbers of simultaneously connected sockets, so the "performance" pitch here is pretty compelling.
I would love to know what HN's python folks are using for new projects these days.
I've looked at FastAPI before and it has some nice features like asyncio and pydantic integrations. But it doesn't have any official users library, and ideally a framework should have that available.
I've started looking outside of the Python ecosystem since it feels like there's nothing new and exciting come out of there. I've played around a bit with NestJS and it's amazing how far the TypeScript ecosystem has come.
It's also crazy that SQLAlchemy doesn't have typing support still, so TypeORM looks really appealing.
I still use Django to this day, both for monolithic SSRs and REST APIs. All the batteries it comes with are perfect for the common use case, and the small "bubble" around it (ex: DRF) is as good as it gets IMO.
The tooling around REST APIs is lacking compared to Flask and FastAPI, but wrangling what's available (drf-yasg, oauth2lib) to suit my needs is better than migrating to another framework, I think.
Same here. When I want to get a new project off the ground, I use Django.
This is partially due to the batteries included, partially because I am very familiar with it.
On more mature projects, at the stage where I really start splitting services into their own hosts, I switch to something else. Django's batteries are no longer useful at this stage, since e.g. auth is its own service now.
I also looked at NestJS before creating FastAPI, and it certainly inspired ideas in FastAPI (also TypeScript in general), you can read more about that here: https://fastapi.tiangolo.com/alternatives/
About ORM with types, yeah, I also want an ORM based on standard type annotations. So I started playing with mixing SQLAlchemy and Pydantic, and I have some unfinished lab experiment with it.
I haven't found the time to finish it, but hopefully I'll find it soon.
I still happily use Flask with mostly Jinja rendered templates.
With things like Hotwire and htmx you can build very nice feeling apps without going all-in with an API back-end and JS front-end.
I released https://github.com/nickjj/docker-flask-example as a starter kit for what I use in new projects. It wires up things like SQLAlchemy, Celery, Flask, gunicorn, Webpack, etc. with Docker. It also comes with CLI extensions for digesting / md5 tagging assets with optional CDN support using Flask-Static-Digest, database migrations / seeding with Flask-DB and more.
If I ever need to create API endpoints I combine Flask-Classful (routing), Marshmallow (serialization / validation) and Flask-JWT-Extended (authentication). I've written about this combo at: https://nickjanetakis.com/blog/flask-libraries-for-building-...
I've been using FastAPI for a while, and think that the API-driven approach that it endorses is a good paradigm for building web applications.
Since the design of FastAPI programs is often modular, with libraries and modules extending the API with additional routes, it seems to make sense to build front-end components according to the same module structure. I'm yet to see any of these for popular libraries, but I'd be up for helping build some in libraries like Vue.
Does anyone know of existing projects/server frameworks where backend modules are coupled with front-end components that consume their API?
I'm currently using React with TypeScript and hooks, though, and it's a great development experience. I plan on adding it to the project generator later.
Where I think this could be taken to the next level of reusability is in modularising the front-end into API-specific components. For example, the login behaviour could depend on FastAPI-Users, with a sibling frontend library containing components that implement the same login flow. Adding user behaviour is then a matter of using the same third-party library on the front and back end.
Actually I also have plans to work on a generic admin UI based on OpenAPI, so it would be independent of any DB or ORM, just based on the defined Pydantic models.
But that's also gonna be for whenever I get the time to invest in it :)
Another thumb up from me for whatever it's worth. Thanks to FastAPI, I was able to create an API for the community with very little experience, in turn enabling a bunch of downstream apps to be made: a bunch of Discord bots, a few calculators, a few websites.
For the front-end, we have a React/Typescript website that is quite easy to extend. The original programmer did a great job organizing the site but I do think FastAPI and React/Typescript helped us along the way.
We have been migrating a project from DRF to FastAPI, and it's a massive performance improvement, which was what we needed. Unfortunately our data was just not well handled by DRF, and serialization from PG > Python > JSON was absolutely horrendous. So we love the speed we are getting with FastAPI.
Now you do lose a lot of things. There is no more Django ORM, so we are back to using SQLAlchemy. I don't think this is bad, but definitely different.
You also have to write a lot more specs and such, which can be annoying, but it's not too bad.
It also encouraged us to go over to type hints on the whole code base, and that has saved us from a few bugs as well.
Overall we really like it, not sure I will go back to DRF on any future big projects. Still handy for something quick, but not what I want to be using moving forward.
Our performance gains have primarily been in serialization, so response times are way better. It used to take multiple seconds for some things to serialize in DRF, and now it's down to sub-second. It's like 4x faster in that regard.
Yeah, I meant Pydantic models, SQLAlchemy models, and the actual endpoints. It's a lot of repetition in places. But the finer-grained controls have been good for us so far. You just have to write the models, and don't need to worry as much about input validation in your own code. Which has been great.
fastapi is very similar to what you’d get from flask with an API spec tool like flask-restx or flask-marshmallow + incubation time to build your own components & patterns around it + fastapi is faster.
So if you are committed to the component tools you already built with flask add ons, stick to flask unless you need the speedup. Otherwise if the project is new or you’re otherwise uncommitted to your existing component tools, switch to fastapi.
FastAPI is a wonderful developer experience, but it needs some work around the production/performance side of things, which is sorely lacking as of right now.
I’ve used FastAPI at multiple companies now and really enjoy working with it. The tight integration with pydantic is wonderful. Also, it really is quite fast out of the box. Great for building ML prediction APIs.
I built a real-time trivia game in FastAPI. I liked everything about using it except for the websockets. It leans on other libraries and I had to figure out my own redis integration. In the end it was only about a day of delay, so no biggie. The library is great, but I can’t speak as to its scalability as the other party didn’t want to pay.
Huge FastAPI fan here. I have used FastAPI and friends (tortoise ORM, fastAPI-sqlalchemy and arq) in my recent projects and it has been the most enjoyable web services writing experience of my life. I had never thought Python's optional typing had any use before I used fastapi.
I looked at FastAPI a while back for an API for a machine learning inference service (you'll see a lot of blog/Medium posts about that).
The syntax and documentation are really good and the docker examples are great for fast startup.
The only problem is that because FastAPI isn't process-based, some machine learning libraries (TensorFlow for example) don't really play nice and you'll get random errors or funky results, especially under load.
Obviously one solution is to use TensorFlow Serving (and deal with that craziness) and let FastAPI do the routing/data processing, but I bet there are production machine learning products using FastAPI that are spitting out random numbers and the developers are oblivious to the problem.
Ideally ML tasks would get executed by background workers, e.g. Celery, and use polling or a non-blocking event loop. It's generally not preferred to run such intensive work inside the context of a web request.
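A rough sketch of that pattern (the task, broker URLs and endpoint paths are all illustrative): the web process only enqueues work and polls for the result, while the Celery worker does the heavy inference:

```python
from celery import Celery
from celery.result import AsyncResult
from fastapi import FastAPI

# Broker/backend URLs are placeholders for your own infrastructure.
celery_app = Celery(
    "worker",
    broker="redis://localhost:6379/0",
    backend="redis://localhost:6379/1",
)


@celery_app.task
def predict(payload: dict) -> dict:
    # Heavy ML inference runs in the Celery worker process, not the web app.
    return {"label": "cat", "score": 0.98}


api = FastAPI()


@api.post("/predict")
def submit(payload: dict):
    task = predict.delay(payload)
    return {"task_id": task.id}


@api.get("/predict/{task_id}")
def poll(task_id: str):
    result = AsyncResult(task_id, app=celery_app)
    if not result.ready():
        return {"status": "pending"}
    return {"status": "done", "result": result.get()}
```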
It really depends on your API clients and your latency.
Celery is fine for long-running or batch work, but the queue, and the fact that you need to store and retrieve the results from somewhere else, aren't really ideal.
Non-blocking event loops (and reactive requests) really like small tasks and will even scream at you if you go above their thresholds.
Honestly I see a bright future for Apache Arrow Flight, but I think it's a bit immature right now.
But my main point is that 80% of the blog posts about FastAPI are for machine learning, and people will just copy the code, run one or two requests, see that it's fine and move along...
I also ran into some performance issues with model serving using FastAPI and ended up using Ray Serve to handle properly distributing the workload and batching requests. With a bit of work I was able to 10x the throughput and cut response time in half.
Ray uses the Apache Arrow Plasma store to avoid the copy and serialization costs that usually come with multiprocessing in Python.
Thanks, I looked at this way back before v1 and had a lot of issues. I guess I need to retest it.
Most of the models I use can be converted to more optimized formats for inference, like https://treelite.readthedocs.io/en/latest/ so the code and interfaces are pretty similar and it makes the architecture less complex to update and deploy.
To be honest it's still kind of a mess. I started out with a FastAPI service that wasn't performant enough and figured that putting Ray Serve behind it would take a day, but it turned out to be a pain in the ass. The docs are really lacking and they're transitioning from Flask to Starlette in 1.2 (which is not out yet), so a lot of the information is wrong or misleading. I ran into a ton of random serialization issues, like simple dataclasses holding a torch tensor and some plain metadata getting pickled and copied, and pydantic models failing to serialize and crashing the whole thing with assertion errors (broken recently in their nightly releases, which are needed to use 1.2).
Depending on string names and handles is also very hacky and completely cripples PyCharm autocomplete / type checking.
The core of the project is great though. We're working on a video processing system that has a ton of downstream models and using Ray makes it possible to get away with running inference in pytorch without going down the TensorRT/inference engine rabbit hole (would be torture for some of our detection and sequential models).
I've just put together a couple of APIs, pretty simple things, but FastAPI made the first one (a bunch of selects) go together super quick (except for matching a legacy query parameter structure that is only questionably legal), and the second one (call-outs to subprocess.check_foo) was going well until I tried to access a raw post body from a sync context. As far as I can tell, it's just not possible with Starlette, so after spending as much time on that as I spent on the rest of the API, I ripped out FastAPI and dropped in flask.
There's a lot more messing with parameters, and it's not as clean, but it does work without dropping subprocess.
Yeah I also just ran into that issue a week ago. Had to look for awful hacks to try to get around it, ended up using Ray Serve to offload the processing to a background worker.
I'd love to give this a go - we use Django/DRF extensively and we love it, however sometimes serialization performance is awful. Are there any speed ups compared to Django in serialization speed?
I love FastAPI. It's one of the best-documented projects I have seen. We're running several latency-critical production workloads in the cloud with absolutely no issues. Great work Tiangolo!
Actually just started on a new project with FastAPI last week. Really liking it!
I think some of the other frameworks I’ve worked with in the past have too much of their own abstraction built in, to the point that it’s difficult to understand the fundamentals of how the application is working. FastAPI excludes unnecessary abstractions, makes it easy to use outside libraries, and feels a lot more intuitive. Less learning the docs and more coding.
We have been tinkering with Falcon. My sense is that Falcon may be better if your server is doing computations rather than just serving database requests. Am I thinking about this correctly, or can FastAPI work well for compute heavy jobs without too much tinkering (i.e. forking the main thread)?
I picked FastAPI when I started a large project last year, when I was looking for something a little nicer than Flask for just an API server.
FastAPI feels like a damn cheat code, it's so well designed with the Pydantic models as input validation, the flexibility, the built in OpenAPI docs, on and on.
I want to use redis to cache response output and large database retrievals with FastAPI, but there isn't any documentation on how to do that or any googleable resources.
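There isn't much FastAPI-specific about it; one straightforward approach is plain redis-py with JSON-encoded values. A sketch (the key names, TTL and query function are illustrative):

```python
import json

import redis
from fastapi import FastAPI

app = FastAPI()
cache = redis.Redis(host="localhost", port=6379, db=0)


def expensive_db_query(item_id: int) -> dict:
    # Placeholder for the slow database retrieval you want to cache.
    return {"item_id": item_id, "detail": "..."}


@app.get("/items/{item_id}")
def read_item(item_id: int):
    key = f"item:{item_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)

    result = expensive_db_query(item_id)
    # Cache the serialized response for 60 seconds.
    cache.set(key, json.dumps(result), ex=60)
    return result
```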
Is this a good choice for realtime websocket projects with many open connections? I found only one simple example in the docs and was not sure whether it will eat too much memory with many clients.
Nice surprise to find this shared on HN!
It's also great to see so many products/projects and companies using it successfully in production!
I see a bunch of questions related to "how FastAPI compares to X", FastAPI was built from the learnings from other awesome tools, and is built on top of great packages. You can read a lot more about it here: https://fastapi.tiangolo.com/alternatives/
If you have questions or problems, you can ask in GitHub issues: https://github.com/tiangolo/fastapi/issues/new/choose
There's also an official Discord chat: https://discord.gg/VQjSZaeJmf
And finally, if you use FastAPI, I would love your input in the first user survey (you could win stickers): https://tripetto.app/run/RXZ6OLDBXX?s=hn