
Designing and managing APIs: Best practices & common pitfalls

December 13, 2018
Martin Nally

Software Developer and API designer, Apigee

The job of an API is to make the application developer as successful as possible. When crafting APIs, the primary design principle should be to maximize application developer productivity and promote adoption.

So what are the design principles that help optimize developer productivity? I recently presented some ideas about this in a webcast (you can watch the replay here).

I was pleasantly surprised by the discussion that my talk sparked; many interesting questions were asked, so I thought I’d share some of them (and my attempts to answer) here.

(Editor's note: Some questions were edited for clarity)

How does HATEOAS fit into pure “HTTP” APIs?

I have seen different interpretations of what HATEOAS means in the context of APIs. One interpretation leads to the practice of trying to describe all the actions that can be performed on a resource in the representation of the resource. For example, if I did a GET on http://etailer.com/orders/1234567, then, in addition to describing the order itself, the JSON that I got back would try to describe all the actions I could perform on that order (cancel the order, re-order the goods, track the shipment, ...).

This is not commonly done, and I myself do not design APIs that work this way. The JSON I design describes only the order itself, including any relationships it has to other entities, expressed as URLs. I assume it is the job of the client code to know what actions it wants to perform and how to perform them [using standard HTTP methods, of course]. This is how most clients are written in practice. Even the modern browser works this way, since operations are now usually coded in JavaScript executed in the client rather than in old-school HTML forms prepared on the server.
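
To make this concrete, here is a minimal sketch of the kind of representation I have in mind. The field names and most of the URLs are invented (only the order URL comes from the example above): relationships appear as URLs in the data, and no actions are described.

```ts
// A hypothetical order representation. Relationships to other entities are
// expressed as URLs; no available actions are described -- the client
// decides what to do and performs it with standard HTTP methods.
interface Order {
  self: string;      // the URL of this order
  status: string;
  customer: string;  // URL of the customer who placed the order
  shipment: string;  // URL of the shipment resource for tracking
  items: string;     // URL of the collection of line items
}

const order: Order = {
  self: "http://etailer.com/orders/1234567",
  status: "shipped",
  customer: "http://etailer.com/customers/98765",
  shipment: "http://etailer.com/shipments/24680",
  items: "http://etailer.com/orders/1234567/items",
};
```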

Does this mean that I am violating the HATEOAS constraint of REST? I'm not sure, but I don't see a reason to worry about it. I don't make any claims for my APIs regarding REST compliance; I just try to use HTTP as simply and directly as I can, and avoid all invention where HTTP already specifies a solution (which, in my experience, it usually does).

Can you speak about API layering (experience APIs vs others)? How do you avoid "spaghetti APIs" over time?

I'm not fond of layering in general, although I recognize that you sometimes need to do it. It is common for companies to have some sort of "generic" API for their problem domain, and end up layering other APIs on top of it. For example, assume I'm in retail and I have APIs for catalog and orders. The mobile app team looks at my API and decides they don't like it for mobile development, so they put another server in front of the generic one that implements its own API and delegates to the generic one.

So now which of the two APIs should others use? Once a few teams have done the same thing, it's no longer clear which is the real API, if any. Some people make a virtue out of this (hence the concept of experience APIs), but I like to minimize layering. If the mobile team needs functionality that the generic API does not have, they can extend the generic API (possibly in their own server), but ideally they should not create a new layer on top.

How can one indicate specific error conditions when no HTTP response code is a good fit, or is not fine-grained enough?

I pick the HTTP response code that is the closest fit, and also return a body with more information. I don't know of a universally accepted standard for the body format, but standards have been proposed, e.g. RFC 7807, "Problem Details for HTTP APIs": https://tools.ietf.org/html/rfc7807
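
To illustrate, here is a problem-details body modeled on the example in RFC 7807 itself; it would be returned with the closest-fit status code and Content-Type: application/problem+json. The type URL and extension member are illustrative.

```ts
// A problem-details body modeled on RFC 7807's own "out of credit" example.
// Sent with status 403 and Content-Type: application/problem+json.
const problem = {
  type: "https://example.com/probs/out-of-credit", // identifies the error condition
  title: "You do not have enough credit.",         // short human-readable summary
  status: 403,                                     // echoes the HTTP status code
  detail: "Your current balance is 30, but the item costs 50.",
  instance: "/account/12345/msgs/abc",             // this specific occurrence
  balance: 30,                                     // extension members are allowed
};
```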

How do we manage resource mapping, and to what depth is it good to go—like /users/{Id}/orders/{oid}/articles?

What you are doing here is inventing a query language. This query says something like "SELECT articles.* FROM articles, orders, users WHERE users.id = $1 AND orders.userID = users.id AND orders.id = $2 AND articles.orderID = orders.id". This query is not optimal if {oid} values are unique across all orders, because then it can be reduced to just /orders/{oid}/articles. Designing your own query language is hard, which is one of the reasons GraphQL has attracted attention.

Personally, I don't like the idea of encoding queries in the path portion of a URL rather than in the query string, because it encourages people to confuse queries with identity lookups. But many people do what you are doing, so I won't claim it is wrong. I also used to do it before I had a change of heart.
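
For illustration, the same question can be asked both ways (URLs invented):

```ts
// The query encoded in the path reads like an identity lookup:
const pathStyle = "http://etailer.com/users/42/orders/1234567/articles";

// The same query in the query string reads like what it is -- a query:
const queryStyle = "http://etailer.com/articles?order=1234567";
```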

For POST, can you speak to the pluses / minuses of query string and JSON body?

Putting queries in a query string and using GET to evaluate the query is attractive and a good fit for HTTP. An example is GET /pets?{your query here}. Unfortunately, there is a practical limit on the size of a URL: above about 4k characters, you run the risk that some proxy in the chain between the client and the server will mangle or reject the request. Because of this, I always offer both GET and POST options for the same query.
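
Here is a sketch of what offering both options might look like, assuming a hypothetical /pets collection and a hypothetical /pets-queries resource that accepts the query in a POST body:

```ts
// A sketch of offering the same query through both GET and POST.
// The /pets and /pets-queries URLs are invented for this example.
async function queryPets() {
  // Small query: encode it in the query string and use GET.
  const params = new URLSearchParams({ species: "dog", age: "3" });
  const small = await fetch(`https://example.com/pets?${params}`);

  // Large query: POST the query in the body instead, to stay well under
  // the ~4k URL size that intermediaries can be trusted to handle.
  const large = await fetch("https://example.com/pets-queries", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ species: "dog", age: 3 /* ...much more... */ }),
  });

  return Promise.all([small.json(), large.json()]);
}
```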

Would it be a generally decent design if we just expose a single API endpoint (e.g. me.com/api) for all communication and use JSON as the payload?

"Endpoint" is not an HTTP concept, but it is fine if all your well-known URLs (and even all the dynamically allocated ones) begin with the prefix me.com/api. The important thing is that every URL should identify some resource. If there is any URL for which you could not easily answer the question "what resource does this URL identify, and therefore what would it look like if I performed a GET on it?", then you are probably not working within the HTTP model.

I understand the linkability rationale for eliminating version numbers from URLs, but what would your strategy then be for handling version differences?

See this blog post on versioning.

How important is it to strive for consistency in API design in terms of resource/entity planning, error message standards, header extensions, etc.?

There is a nice quote from Fred Brooks on this:

“Blaauw and I believe that consistency underlies all principles. A good architecture is consistent in the sense that, given a partial knowledge of the system, one can predict the remainder.” - Frederick P. Brooks, Jr., The Design of Design: Essays from a Computer Scientist, 2010.

In other words, consistency is paramount. The easiest way to get consistency is to just use HTTP without adornment or invention. Where HTTP does not provide answers (this is less common than many people think), try to pick one solution and stick with it.

Is there any concept of statuses in RESTful APIs (viz. draft, dev, test, released, obsolete)? How do you implement this lifecycle of statuses? Is there any documentation on this?

HTTP does not address this—HTTP would view this as part of the modelling of your problem domain, and therefore out of scope of HTTP itself. One piece of guidance would be "don't put the state (or status) of a document into its URL, because that will change".
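
A minimal sketch of that guidance, with invented names: the lifecycle status lives in the representation, and the URL stays stable as the status changes.

```ts
// The document's URL never encodes its status; the status is a property
// of the representation and changes over time while the URL does not.
interface ApiDocument {
  self: string;
  status: "draft" | "dev" | "test" | "released" | "obsolete";
}

const doc: ApiDocument = {
  self: "https://example.com/documents/777", // not /draft/documents/777
  status: "draft",                           // will change; the URL will not
};
```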

See this article for more.

What are your thoughts on using an API gateway as an internal enterprise integration hub/gateway?

We have many customers using Apigee Edge as both internal and external hubs/gateways. This is also an investment area for us. If this is an important topic for you, you should ask for a presentation/briefing focused on the topic.

You said you implemented something similar to GraphQL. Can you share what made you implement that?

The API had a set of well-known URLs of the form /widgets, /thingummies, /doohickeys. We wanted to offer URLs of the form /widgets?{query}, /thingummies?{query} and /doohickeys?{query}, and we also needed to offer /query?{query} for queries that "join" across resource types. We had two needs: define a syntax for queries and provide an implementation. We looked at GraphQL, but we were nervous about a design that runs a complex query engine in application space and relies on primitive APIs for raw data access. We have no objective evidence that GraphQL would have been problematic, but the idea made us nervous.

We designed our system so that it stores a copy of all the data in a set of read-only tables in a standard database system with replication and scale-out. This allows us to push the queries down to our database rather than implementing them in a system like GraphQL. This option will not be open to everyone, because you can't always get all the data into a database. If you can, it enables the queries to execute on a standard database query engine and, more importantly perhaps, execute very close to where the data is stored.

We designed a fairly simple query language that happens to be conceptually similar to GraphQL's (although its design predates our exposure to GraphQL) and a simple processor that translates these queries into the query language of our database. Our language is not as rich as GraphQL but has proven very effective. The whole thing is very simple, and has worked well with good performance. Creating the right indexes on the database table(s) can take a little thought, but our experience is that a few well-chosen indexes enable decent performance on a wide range of queries.
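
As a rough illustration of the idea (not our actual language or translator, which are richer than this), a simple field-selection query might be mechanically translated into parameterized SQL like so:

```ts
// A toy query shape, loosely GraphQL-like, invented for illustration.
interface Query {
  resource: string;                        // e.g. "widgets"
  fields: string[];                        // e.g. ["id", "name"]
  where?: Record<string, string | number>; // simple equality filters
}

// Translate the query into parameterized SQL so it runs on the database's
// own query engine, close to where the data is stored.
function toSql(q: Query): { text: string; params: unknown[] } {
  const params: unknown[] = [];
  const cols = q.fields.map((f) => `"${f}"`).join(", ");
  let text = `SELECT ${cols} FROM "${q.resource}"`;
  const clauses = Object.entries(q.where ?? {}).map(([k, v]) => {
    params.push(v);
    return `"${k}" = $${params.length}`;
  });
  if (clauses.length > 0) text += ` WHERE ${clauses.join(" AND ")}`;
  return { text, params };
}

// toSql({ resource: "widgets", fields: ["id", "name"], where: { color: "red" } })
// => { text: 'SELECT "id", "name" FROM "widgets" WHERE "color" = $1',
//      params: ["red"] }
```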

Perhaps the biggest lesson we learned is that having a good query capability (in addition to standard HTTP CRUD, of course) is very powerful—people have done all sorts of interesting things on top of our API without ever having to talk to us or request new features. This has also helped avoid the need for "experience APIs" (see response above) layered on top of our API.

If versions are not part of links, how do we make links to new resources that exist in v2 but not v1? And how do we link between multiple APIs—does this assume that versioning is done at the header level?

Links are always written using URLs of resources that do not contain version numbers. If clients want to request a specific version of a resource, there are two choices. The first is to allow clients to provide an Accept-Version header in their requests. The second is to provide clients with a rule for transforming the URL of a resource into the URL of one of its versions.

Since we can't get people to agree which one they want, we just implement both in the APIs I work on. In my experience, the energy required to implement both is much less than the energy required to argue about it. They both make sense in the HTTP model. I personally prefer the first approach (header), because it doesn't require clients to learn a "rule" that is specific to our application for transforming URLs. The argument for a header would be stronger if someone would standardize Accept-Version so it could be referenced by all applications.
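
Here is a sketch of both client options against a hypothetical API. Note that Accept-Version is not a standardized header, and the URL-transformation rule is specific to the application:

```ts
// Two ways a client can ask for a specific version of a resource.
async function getOrderV2() {
  const resource = "https://example.com/orders/1234567";

  // Option 1: request a version via a header on the unversioned URL.
  const viaHeader = await fetch(resource, {
    headers: { "Accept-Version": "v2" },
  });

  // Option 2: apply an application-specific rule that transforms the
  // resource URL into the URL of one of its versions.
  const toVersionUrl = (url: string, version: string) =>
    url.replace("example.com/", `example.com/${version}/`);
  const viaUrl = await fetch(toVersionUrl(resource, "v2"));

  return [viaHeader, viaUrl];
}
```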

You mentioned the risks of coupling an API entity model to its domain or data model. Can you talk more about those risks and suggest some ways to mitigate them?

The model you expose through an API is the conceptual model of the problem domain as seen by a client. The actual storage model is often more complex than the conceptual model, but it doesn't always have to be. Decoupling them is useful because it allows you to keep the conceptual model simple even if performance or other concerns force compromises in the storage model.
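
A small sketch of what that decoupling can look like, with invented names: the storage row carries compromises (here a shard key) that the conceptual representation never exposes.

```ts
// Storage model: what the database actually holds.
interface OrderRow {
  id: number;
  customer_id: number;
  shard_key: string; // a performance compromise clients never see
}

// Conceptual model: what the API exposes to clients.
interface OrderResource {
  self: string;
  customer: string; // relationship expressed as a URL
}

function toResource(row: OrderRow): OrderResource {
  return {
    self: `https://example.com/orders/${row.id}`,
    customer: `https://example.com/customers/${row.customer_id}`,
    // shard_key deliberately not exposed; the storage model can change
    // without breaking the conceptual model.
  };
}
```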

Do you have advice on the use of patterns like idempotency and upsert vs. discrete CRUD?

If you are writing a microservices application rather than a monolith, you will probably face the problem where a single conceptual create, update, or delete requires changes to state stored in more than one microservice. I have not found a very simple way of doing this reliably. My current approach relies heavily on idempotency, but also requires a sort of application-level two-phase commit strategy. It works, but it's not simple.
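
The idempotency half of that approach can be sketched with a client-chosen idempotency key (names invented; the two-phase part is not shown):

```ts
// Record each request's outcome by key so that retries are safe.
const processed = new Map<string, { status: number; body: unknown }>();

function createOrder(idempotencyKey: string, order: unknown) {
  // A retry with the same key returns the recorded outcome instead of
  // creating a duplicate.
  const prior = processed.get(idempotencyKey);
  if (prior) return prior;

  const result = { status: 201, body: { order } }; // ...do the real work here
  processed.set(idempotencyKey, result);
  return result;
}

// Calling createOrder("key-123", {...}) twice yields one creation and two
// identical replies, which is what makes retries safe.
```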

HTTP has standard support for upsert—it is called PUT. I used to rely on POST for create, and either ignore PUT completely in favor of PATCH or (shame on me) implement only half the function of PUT (update but not create). Recently I have been working on some APIs with a requirement that clients be able to synchronize the state of the API from a set of external files. This has given me a new understanding of the value of PUT.
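
A sketch of PUT's full upsert semantics over a toy in-memory store: create the resource at the client-chosen URL if it is absent, replace it if it is present.

```ts
// A toy store keyed by resource path; types are illustrative only.
const store = new Map<string, unknown>();

function put(path: string, representation: unknown): { status: number } {
  const existed = store.has(path);
  store.set(path, representation);        // full replacement, not a patch
  return { status: existed ? 200 : 201 }; // 201 Created on first PUT
}

// Idempotent: repeating the same PUT leaves the same state behind, which
// is what makes it a good fit for synchronizing state from files.
put("/widgets/1", { name: "sprocket" }); // => { status: 201 }
put("/widgets/1", { name: "sprocket" }); // => { status: 200 }
```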

What factors should be used in breaking resources into APIs? Does 100 resources mean 1, 2, 5, or 100 APIs?

As I said above, "API" is a slippery word. In HTTP there are only resources. A simple strategy is to have one URL for each type (e.g. /dogs, /people) plus one URL for each instance (the format of these URLs does not have to be specified unless you allow create via PUT); see the sketch below. How many APIs is that? 1? 2? 102? Somewhere in between? I'll let you decide.
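
For two of those hundred resource types, the strategy looks like this (paths illustrative):

```ts
// One URL per type plus one per instance, shown for two resource types.
const routes = [
  { method: "GET",  path: "/dogs" },      // the collection of dogs
  { method: "POST", path: "/dogs" },      // create a dog; the server picks its URL
  { method: "GET",  path: "/dogs/{id}" }, // one dog; format opaque unless PUT-create is allowed
  { method: "GET",  path: "/people" },
  { method: "POST", path: "/people" },
  { method: "GET",  path: "/people/{id}" },
];
```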

Further reading

I've written a number of blog posts on API design, which you can read for more best practices.
