Take a PostgreSQL database, point Hasura at it, and get a GraphQL endpoint you can query with a rich set of instructions. I'm sure a flurry of questions springs up: what about auth for accessing data? Permissions for all the CRUD needs? What about my business logic? All of that will be covered.
When I started using GraphQL I didn't get it. I hadn't gotten very far into using GraphQL before I found Hasura and was amazed. For all intents and purposes it's the only GraphQL flavor I've realistically used, so take everything I write with a grain of salt. I won't really focus on GraphQL itself, though; my focus will be on all of the benefits Hasura provides to developers, teams, and beyond.
Rigid, predictable structure for the key data. Flexibility for the one-off bullshit for some weird feature I'm not convinced we should be building. My choice of requests, and real time.
Quick access to all data, and to my specific needs, without 40 requests, so my 3G-connected folks aren't watching spinners. Finally, I don't want to be stuck in an impossible hole I can't dig myself out of.
I'll preface this with: I've historically been a front-end-focused developer. Not that I never did backend, but infrastructure wasn't my forte. As I became a better developer and learned from others more focused on backend/devops, one lesson stuck: there is always a trade-off when choosing a specific technology. Front end, back end, even infrastructure, there are always trade-offs.
When picking a new technology to add to the stack you find yourself asking a few things. Am I going to be stuck with this? A broad question, but it could be anything related to vendor lock-in: being stuck with a solution because it's too difficult to migrate away after building upon it. That brings up the next question: how difficult is it to migrate away? Can it be done slowly over time, is it a big rewrite, or is it somewhere in between?
The classic loaded question that gets tossed out every time: "Is this going to scale?". A valid question if you apply some constraints. Will my response times be acceptable as complexity grows? Is the system resilient to infrastructure changes? Will this scale when we have 50 devs working on it versus 2?
Easy local development is essential. It's so frustrating to show up at a new job, get handed a README, and spend a week trying to get the system set up. Throw in having to go set up specific accounts on black-box cloud providers, and you'll be questioning every past decision of everyone who has worked on the code base.
There are infinitely more questions, but I find people reach for the solution that they know doesn't scale and push the pain down the road. I don't think that's a bad idea, but there is a better solution.
Before I dive into why I'm a big fan of Hasura, let's take a look at some "competitors" in the ecosystem.
Anytime I choose a piece of technology I ask "what's under the hood". In the case of Firebase or Fauna it's a black box. There is an unknown amount of magic. Selecting them is putting a lot of trust in something entirely unknown.
Firebase and Fauna are both hosted solutions, and with that black box you're now officially vendor-locked. Scaling, transitioning away, or solving issues becomes a time suck of reverse-engineering what is going on. Not to mention your ability to "scale" is now at the mercy of how big your wallet is. In some cases it might be worth it.
Firebase's auth role system is complex, to the point that security and exposing users' data is a major concern. Firebase Auth works great with Firebase, but integrating external auth systems seemed like more work than it was worth. That said, I don't have much experience with either of these beyond the few times I've interacted with existing systems using them.
My biggest gripe with Firebase is how much denormalized data needs to be stored just to build out the desired UI and functionality. Beyond that, the only real way of interacting with Firebase is through the Firebase libraries. You've now removed your ability to optimize your front end and are at the mercy of the solutions the Firebase devs choose.
The nice part of Firebase is real time. Everything is real time. That's both good and bad: it forces you into a very specific way of developing your apps.
Postgres is a known. What I mean is that all the questions above, "does it scale", "how do we migrate away", "how do we solve XYZ issue", and even "how do we do local development with it", are already answered. With every update Postgres gets better in performance, features, and security.
Postgres extensions solve so many issues without having to reach for other solutions. With PostGIS, I can query based upon a radius around coordinates and build out some truly amazing systems. With Postgres trigrams (pg_trgm), you have fuzzy full-text searching.
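As a quick sketch of what those two extensions buy you (the places and users tables and their columns here are made up for illustration):

```sql
-- PostGIS: find places within 5km of a point.
-- Assumes a "places" table with a geography(Point) column named "coords";
-- ST_DWithin on geography values measures distance in meters.
SELECT id, name
FROM places
WHERE ST_DWithin(coords, ST_MakePoint(-122.42, 37.77)::geography, 5000);

-- pg_trgm: fuzzy matching that survives typos.
-- Requires CREATE EXTENSION pg_trgm; the % operator is trigram similarity.
SELECT id, name
FROM users
WHERE name % 'jhon smth'
ORDER BY similarity(name, 'jhon smth') DESC;
```

Both queries can be backed by GiST/GIN indexes, so they stay fast without leaving Postgres.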
I'm sure there are issues with Postgres; I've seen complaints. But the majority of those crop up at planetary scale. Even then there are solutions you can swap to; one I know of is YugaByte, and it's PostgreSQL-compatible.
The takeaway is that Postgres is a known entity. There is no vendor lock-in, it scales, and local development is a solved problem. It's just SQL.
Hasura takes the known entity that is Postgres and turns it into a magic GraphQL endpoint, locked down by default. Postgres and GraphQL are both fairly known entities; GraphQL less so, but it's gaining popularity.
It's open source and written in Haskell. Do I know Haskell? Hell no. But in the event Hasura were ever to shut down, development could still continue.
Let's start with the initial questions we posed. Are you stuck with it? Will it scale? Local development? How difficult is migrating away?
Short answer: not totally. Because all of your data is stored in Postgres, migrating away from Hasura means migrating back to writing your own SQL queries or using an ORM. You will, however, need to take care of permissions for data access yourself.
Hasura can scale horizontally. It does hold onto some state for caching queries and data, but if you need to handle more requests, throwing up more Hasura instances results in relatively instant scaling. Migrations scale well as more developers are added. Metadata scales well too; the only issue I see is multiple devs modifying permissions on the same table and having to manage merge conflicts. In my experience, though, permissions rarely get modified to provide more access once they're set.
Local development can be wrapped up in a docker-compose file for a one-command setup to get started developing.
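A minimal sketch of that compose file, modeled on the one in Hasura's docs (the image tags and secrets here are placeholders you'd swap for your own):

```yaml
version: "3.6"
services:
  postgres:
    image: postgres:12
    environment:
      POSTGRES_PASSWORD: postgrespassword   # placeholder, change it
    volumes:
      - db_data:/var/lib/postgresql/data
  graphql-engine:
    image: hasura/graphql-engine:v1.3.3     # pin whatever version you use
    ports:
      - "8080:8080"
    depends_on:
      - postgres
    environment:
      HASURA_GRAPHQL_DATABASE_URL: postgres://postgres:postgrespassword@postgres:5432/postgres
      HASURA_GRAPHQL_ENABLE_CONSOLE: "true"
      HASURA_GRAPHQL_ADMIN_SECRET: myadminsecretkey  # placeholder, change it
volumes:
  db_data:
```

docker-compose up and a new dev has the database, the API, and the console running locally.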
Database migrations are first class and are auto-generated when using the console UI, creating both up and down SQL statements so you can roll forward and backward.
All metadata related to permissions is stored as separate YAML files. Added permissions are git-diffable, making it easier to review exactly what permissions a dev might be adding.
Auth can be JWT-based with a secret key for the basics. It can utilize JWKS when using many of the popular third-party auth systems like Auth0. Additionally, for very complex auth situations, each request can be run through an auth hook: a webhook that is invoked to check the token and return values for Hasura to use in permissions. These can be roles, user ids, allowed ids, etc. There is always an escape hatch.
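For reference, the decoded JWT that Hasura reads carries its session variables under a namespaced claim, roughly like this (the ids and roles below are made up):

```json
{
  "sub": "1234567890",
  "name": "Jane Doe",
  "iat": 1516239022,
  "https://hasura.io/jwt/claims": {
    "x-hasura-allowed-roles": ["user", "editor"],
    "x-hasura-default-role": "user",
    "x-hasura-user-id": "42"
  }
}
```

Everything under the hasura.io/jwt/claims key becomes a session variable you can reference in permission rules.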
Because Hasura understands your JWT and auth, it adds a level of power to the permission system. You have access to a huge array of options, including and/or operators, or checking if something exists in another table. A prime example is allowing a user to update a field: put the user id in the JWT, check that the row's user id matches the x-hasura-user-id inside of the JWT, then select the fields they are allowed to update.
Beyond comparing values inside of the JWT you can also compare values in other tables to values in the table you're setting up permissions in.
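In the metadata YAML, a "users can only update their own rows" permission looks roughly like this, assuming a posts table with a user_id column (the table and column names are illustrative):

```yaml
# update permission on the "posts" table for the "user" role
- role: user
  permission:
    columns:          # only these fields may be updated
      - title
      - body
    filter:           # row check: the JWT's user id must match the row
      user_id:
        _eq: X-Hasura-User-Id
```

The filter block is the boolean expression language; _eq here could just as easily be _and, _or, _in, or a check through a relationship.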
This enables all of the basic CRUD you would need and even lets you dive into advanced CRUD, like more advanced permission checking. Imagine having a table of user permissions and only allowing a user to update a specific company they have been granted access to. This might take significant backend code elsewhere, but here it's a combination of checking that the user's id exists in users_permissions with a row whose companyId matches the company they're attempting to update. If it does, allow the update; otherwise don't.
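As a sketch in metadata YAML, assuming an array relationship named users_permissions from companies to the users_permissions table (joined on companyId), that check reads as "a row granting this user access exists":

```yaml
# update permission on the "companies" table for the "user" role
- role: user
  permission:
    columns:
      - name
    filter:
      users_permissions:          # relationship traversal in the check
        user_id:
          _eq: X-Hasura-User-Id
```

No backend code; the join and the comparison both happen inside the generated SQL.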
I'm usually against generated code, but in the case of GraphQL and TypeScript it makes development a breeze. I currently utilize one such code generator, and there are some other new auto-query builders that work from just the GraphQL schema coming out.
My flow currently includes writing queries in specific directories. I have the codegen pointed at the Hasura GraphQL endpoint; we can pass back x-hasura-role in the schema request to grab a per-role schema, and thus per-role generated queries.
This generates TypeScript, Apollo hooks, and even backend requests. With TypeScript, queries, mutations, the data returned, and more all autocomplete.
If you have specified a value that you did not grant the specific role permission for inside of Hasura, the generation will fail. So before you ship to production you can verify that each role has access to exactly the data it should.
All the types from the database flow all the way through. Postgres field types => GraphQL types => TypeScript types. No more writing types for your data!
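To make that concrete, here's a hypothetical slice of what a generator might emit for a simple appointments query (the names are illustrative, not any tool's actual output):

```typescript
// Hypothetical generated output for:
//   query Appointments { appointments { id startsAt } }
// Field types mirror the Postgres columns (uuid -> string, timestamptz -> string).
type AppointmentsQuery = {
  appointments: Array<{ id: string; startsAt: string }>;
};

// A response from Hasura is now fully typed; no hand-written types needed.
const response: AppointmentsQuery = {
  appointments: [{ id: "a1", startsAt: "2021-01-01T10:00:00Z" }],
};

// TypeScript rejects typos like `response.appointments[0].startAt`
// at compile time, before the code ever runs.
console.log(response.appointments.length);
```

The point is the chain: change a column type in Postgres and the TypeScript compiler flags every usage that no longer matches.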
Once you hit the limits of permissions inside of Hasura, there are two ways to extend it with custom business logic.
The first is Actions. These are webhooks whose schema is included in your metadata. When the query or mutation is invoked via GraphQL, Hasura invokes the webhook (a lambda, an express route, any serverless function, whatever), which should return data in the promised shape of the GraphQL output.
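A sketch of an action handler, assuming a hypothetical getQuote action whose declared output type is { quote: String! }; Hasura POSTs a body carrying the action name, the input arguments, and the session variables, and expects JSON matching the output type:

```typescript
// Shape Hasura sends to an action webhook (simplified for the sketch).
type ActionPayload = {
  action: { name: string };
  input: { symbol: string };                 // the action's declared arguments
  session_variables: Record<string, string>; // e.g. x-hasura-user-id
};

// Hypothetical business logic; must return the action's declared output type.
function getQuoteHandler(payload: ActionPayload): { quote: string } {
  const symbol = payload.input.symbol.toUpperCase();
  // Real code would call an external pricing API here.
  return { quote: `${symbol}: $100.00` };
}

const result = getQuoteHandler({
  action: { name: "getQuote" },
  input: { symbol: "acme" },
  session_variables: { "x-hasura-user-id": "42" },
});
console.log(result.quote); // "ACME: $100.00"
```

Wrap that function in whatever HTTP layer you like; the contract is just JSON in, JSON matching the declared output type out.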
One benefit of actions is that you can set up relationships to existing tables in the database, and permissions are respected. For example, rather than returning all appointment data, if I return an appointmentId from an action I can relate that to the appointments table. Now you can query for additional appointment data and rest secure knowing your permissions prevent the user from seeing appointments they don't own (if you've set your permissions up that way, of course).
Async actions add a new layer on top of actions. Rather than a single query/mutation, the generated GraphQL is two calls: one for the query/mutation and one for a subscription. The purpose is long-running tasks that will either fail or succeed. When the request is made it returns an id; you then subscribe on that id, and it will respond with the data once resolved. This is an amazing feature when you're hitting external services with unknown response timing.
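A sketch of that generated pair for a hypothetical sendInvoice async action (the names and fields of the action itself are made up): the mutation returns just an id, and the subscription delivers the output, or errors, once the webhook resolves:

```graphql
# Kick off the long-running task; returns immediately with an action id.
mutation {
  sendInvoice(invoiceId: "inv_123")
}

# Subscribe on that id; output stays null until the webhook responds.
subscription {
  sendInvoice(id: "action-id-from-the-mutation") {
    id
    output {
      status
    }
    errors
  }
}
```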
Remote schemas are a powerful concept that stitches an external GraphQL schema into the top-level query root. So if you maintain your own existing GraphQL service, need to add business logic, or use a third-party CMS that provides a GraphQL endpoint, you can add them directly to Hasura.
With Hasura, all of your queries, mutations, and remote data are stitched together in a handy sort of service-discovery way.
Storing references to external data in your database usually means having to reach out to an external service to grab that data. This is now a baked-in concept in Hasura called remote joins. You can point any field at a query from your remote schema, and the remote data is stitched right into the request alongside the queried value.
One heavily used case is signed URLs. You can create a field that points at a remote schema and passes along the file's ID; the remote schema then generates and returns a signed URL for the image.
This works for anything. You might store a reference to your external auth system and then query for the user's information. So rather than syncing data between your system and the external one, you always reach out to the external system and its data is stitched in.
Very similar to the permission setup, queries can leverage complex groupings of filtering, ordering, limit, and offset at every level, including related data joined through foreign keys or through Hasura relationships.
Websockets are built in, and are built right on top of all the auth and permissions. Any complex query you've set up to access data from your database will be respected.
Anytime anything changes in the data your subscription represents, however specific or broad, you will get updated data.
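As a sketch, assuming an appointments table with a startsAt column and a company relationship (names made up): a live query is just the query keyword swapped for subscription, and filtering, ordering, and permissions all still apply:

```graphql
subscription MyUpcomingAppointments {
  appointments(
    where: { startsAt: { _gte: "2021-01-01" } }
    order_by: { startsAt: asc }
    limit: 20
  ) {
    id
    startsAt
    company {
      name
    }
  }
}
```

Whenever a row enters or leaves that result set, the client receives the fresh data over the websocket.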
With scheduled tasks and CRON triggers you can invoke any webhook with arbitrary payloads of data, whether a one-off task at a given date and time or a classic CRON firing at set intervals.
This is great for scheduling emails, text messages, push notifications, or whatever else you need. You can set all of this up without any external systems.
Event triggers fire when a specified table, or optionally a specific column, is updated. The webhook is invoked with the new and old data. This allows you to separate out small pieces of code that react to changes regardless of which code path caused them.
Not only do these webhooks get invoked when the database updates, you additionally get information about who triggered the update, including the role and all associated session variables.
This allows you to setup whatever necessary complex logic you need based upon who has updated specific data.
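The webhook body looks roughly like this (trimmed; the trigger, table, and row values below are made up), which is where the old/new data and the caller's session variables show up:

```json
{
  "id": "event-uuid",
  "trigger": { "name": "notify_on_update" },
  "table": { "schema": "public", "name": "appointments" },
  "event": {
    "op": "UPDATE",
    "session_variables": {
      "x-hasura-role": "user",
      "x-hasura-user-id": "42"
    },
    "data": {
      "old": { "id": "a1", "status": "pending" },
      "new": { "id": "a1", "status": "confirmed" }
    }
  }
}
```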
The first response I get from most people is "Oh, I can't just npm install it". There are many reasons why it has to be a separate running service, but this does create a huge barrier for many front-end people attempting to test out Hasura.
It currently only supports Postgres; however, MySQL support is in development, and if they can convert to one SQL structure they can convert to others. This is less of a con for me personally, as I always reach for Postgres, but for others that can only use MySQL it is a con.
When starting up, if Hasura can't reach a remote schema you'll run into inconsistent metadata issues, since it fetches the schema from the remote server. This error on startup can be scary. In the case of their cloud you can't even view the instance if the remote schema isn't reachable.
If the remote schema changes, you have to hit an endpoint on the Hasura instance to reload it. I don't know the fix for this; I just find it a little frustrating.
Support for Relay was released as alpha; however, it currently doesn't work with other parts of the Hasura system like actions or, seemingly, remote schemas.
Sometimes the stuff they release is 99% amazing and 1% "if you only went a little further it'd be so much better". For example, actions don't allow you to return nested objects, which is frustrating when you are attempting to migrate away from remote schemas (so you don't have to reload constantly).
Also, remote relationships only work with remote schemas and not actions. It would be nice to be able to join a piece of data and point it at an action.
If you're defining a role with a permission check, the columns you allow are the columns that will always be allowed. It would be nice to have conditional column permissions, letting you control whether a certain field can be updated based upon a separate condition for that role.
Generally in SQL databases you'll have one-to-one, one-to-many, or many-to-many relationships in your data. When doing insertions, the ability to do it all in one mutation as a transaction is sometimes impossible: you first have to insert the data to get IDs, then insert into the relationship tables.
They can be slow to release announced features; it took over a year after announcing remote relationships to actually release them. But their "Parse, Don't Validate" refactor should allow them to ship faster.
Async actions are great for slow execution, but if your lambda times out after 30 seconds, your job better take less than 30 or it'll never complete. That's not an issue with Hasura, just an issue with the ecosystem pushing serverless.
Managing secret keys is annoying when dealing with services not in a private VPC; you have to verify every request against a shared secret.
I really want to be able to group by stuff, but in order to do that you need to spin up separate Postgres views. This isn't that big of an issue, but it's definitely not as flexible as I would like.
Not only does this generate queries for many different setups, including Apollo, urql, and graphql-request, it also generates all the types. So my data, queries, and mutations are all typed specifically to a user's role, which is incredibly powerful.
Additionally, unlike with many other solutions, I find myself not needing to do any sort of data denormalization, because Hasura handles the N+1 issue and makes it easy to arbitrarily connect disparate data.
Besides the ones I mentioned before (Firebase/Fauna), there are other alternatives. One is a popular library I've seen people reach for; however, I have no experience with it. Another I put here, but I feel like as of v2 it's less of a direct alternative to Hasura; it has migrated to the stance of just creating the SQL queries and less of the "whole picture" like Hasura.
As I said before, I found so much quick success that I never did explore these other options. Now, comparing the amount of code I'd have to write with competitors to accomplish the same things as with Hasura is off-putting.
Regardless of the cons I mentioned, I'm still infinitely bullish on Hasura. The pace at which I can move as a developer, and the value I can provide to my company, all without compromise, is next level.
Join our community and get help with React, React Native, and all web technologies. You can even recommend tutorials and content you want to see.