7 API Metrics You Should Monitor [API Monitoring Guide]

17 Jun 2024

 

APIs have become the de-facto standard in building and running modern applications.

They are an integral part of any business's automation workflow, and as more users rely on your APIs to power their applications, reliability becomes critical. Any degradation in their health, availability, and performance will impact your business, so ensuring their reliability depends on proactively monitoring your APIs.

Interested in actively monitoring your website's performance?

Get our free ebook on Website Monitoring today.

Download EBook

In this guide, we'll explain what exactly API monitoring is and why you need it. Different teams inside an organization can benefit from it, so we'll also go through the key API metrics you should keep track of and how to choose the best API monitoring tool for the job.

But first things first.

What Is an API?

An Application Programming Interface (API) enables two systems to communicate with each other. It's like a contract that states how information can be transferred between the two.

APIs are important because they make things easier for you by extending the functionalities and capabilities of tools you're already using without investing too much in integrations.

There are different types of APIs you can use to leverage their full potential and support your business processes:

  • User-facing APIs – the documented and versioned APIs exposed by your application for your users to integrate it into their workflows.
  • Internal APIs – the APIs used by your web application frontend or by other services in your system.
  • Third-party APIs – the APIs your system depends on; your web application frontend or backend systems rely on them.
  • Webpage URLs – although these don't fall into the category of APIs, most API monitoring tools can monitor webpage URLs as well.

Here are a few examples of APIs we use in our everyday lives:

  • Social media APIs like Facebook's and Twitter's track your interactions within their platforms as well as with third parties
  • PayPal, available on pretty much any online store, uses APIs to handle transactions
  • Login and registration are handled via internal APIs or third-party authentication APIs like auth0.com

What Is API Monitoring?

Nowadays it is common to use out-of-the-box solutions for common functionality like payments, chatbots, user tracking, etc. But this comes with a disadvantage.

Using someone else's code and infrastructure takes control out of your hands and makes you and your users rely on their capabilities. As you can imagine, this sparks a need to ensure they provide a fast and reliable service that at least matches yours in terms of performance.

To make sure all your APIs are up and running at all times, you'll want to set up API monitoring, and here's where Sematext shines. It allows you to keep a close eye on every API integration you have.

API monitoring is exactly that: a way to ensure all your third parties are working as expected and delivering the same quality across all services.

Why Monitor APIs

Many applications depend on APIs to carry on business transactions, which makes them critical for operations. Without knowing whether your APIs are available or how they are behaving, you risk creating bottlenecks that affect application performance and end-user experience. We looked closely and came up with this list of benefits you get by monitoring APIs.

Proactive monitoring

The main benefit of API monitoring is that you learn when your API is down or its performance is degraded before your customers tell you. With the right set of information, you can investigate and fix issues faster. If API monitoring is part of your CI/CD pipeline, you can catch issues before they reach production.

Measure the impact of performance improvements

You cannot improve what you cannot measure. API monitoring helps you verify the impact of performance improvements made to the application. With historical response-time data, you can compare performance before and after the changes.

SLO monitoring

In a DevOps culture, setting Service Level Objectives (SLOs) for your services is important. API monitoring helps you make sure you meet your SLOs and track Service Level Indicators (SLIs) for the service, such as latency and errors.

Third-party SLA monitoring

If your application depends on third-party APIs, you can monitor those APIs to make sure they adhere to their specified Service Level Agreements (SLAs).

How Does API Monitoring Work

API monitoring works by periodically invoking the API from multiple locations around the globe and recording various performance timings and response details. The API monitoring process consists of the following steps:

  • Configure – set the various parameters for the API, like the URL, HTTP method, request details, expected values, and the locations to run the API checks from.
  • Run – the API is periodically invoked from the specified locations with the configured parameters. The system records the results of each invocation, such as response times, HTTP status, and response details.
  • Alert – the actual values are checked against the expected values in the configuration. If they do not match, the run is marked as failed and alerts are sent.
  • Report – reports on availability and response time over a period are generated for historical analysis.
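The run-and-alert steps above can be sketched in a few lines of Python. Everything here is a hypothetical stand-in: `fake_invoke` replaces the real HTTP call so the flow works without a network, and the URL and thresholds are invented. A real monitor would perform an actual request and route alerts to a pager or chat channel.

```python
import time

def run_check(invoke, url, expected_status=200, max_latency_ms=500):
    """Run one API check: invoke the endpoint, compare actual vs. expected."""
    started = time.monotonic()
    status, body = invoke(url)                      # real code would do an HTTP GET here
    latency_ms = (time.monotonic() - started) * 1000
    failed = status != expected_status or latency_ms > max_latency_ms
    return {"url": url, "status": status, "latency_ms": latency_ms, "failed": failed}

# Stubbed invocation so the sketch runs without a network.
def fake_invoke(url):
    return 200, '{"ok": true}'

result = run_check(fake_invoke, "https://api.example.com/health")
if result["failed"]:
    print("ALERT:", result)   # a real monitor would page or post to Slack here
```

The same `run_check` would simply be scheduled periodically, once per configured location, to cover the Run and Alert steps.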

What API Metrics Should You Track?

As part of API monitoring, any good dashboard should include checkups for availability (connectivity), correctness, and performance. Among those, here are the key API metrics you should always measure:

Application API metrics

Application-specific metrics give insight into how well your application is performing independently of other factors. These are good metrics to use for website benchmarking, and they are worth tracking over longer periods of time.

Requests Per Minute (RPM)

Requests per minute is a throughput metric that measures the number of requests your API handles per minute. While this number varies by day of the week or even time of day, RPM is usually reported as an average.

Latency

Network latency is the time it takes for data or a request to go from the source to the destination. Latency in networks is measured in milliseconds. The closer your latency is to zero, the better. If your latency is high, your whole website will suffer, and in turn this will negatively impact your users' experience.

Failure Rate

Your APIs will fail; it's not a matter of if but when, so it's important to track how often failures happen. Knowing how often an API fails, especially an external one, helps you decide on a course of action: either create fallback scenarios or switch the service provider altogether.
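As a rough illustration, both RPM and failure rate can be derived from the same request log. The sample data below is made up; real monitors would read this from recorded check results.

```python
def rpm_and_failure_rate(requests, window_minutes):
    """requests: list of (epoch_seconds, http_status) tuples observed in the window."""
    total = len(requests)
    failures = sum(1 for _, status in requests if status >= 500)
    rpm = total / window_minutes
    failure_rate = failures / total if total else 0.0
    return rpm, failure_rate

# Hypothetical sample: 6 requests over a 2-minute window, one of them a 503.
sample = [(0, 200), (10, 200), (30, 503), (65, 200), (90, 200), (110, 200)]
rpm, rate = rpm_and_failure_rate(sample, window_minutes=2)
print(rpm, round(rate, 3))   # 3.0 requests/minute, ~16.7% failure rate
```

Counting only 5xx responses as failures is one possible choice; depending on the API, 4xx responses or timeouts might count as failures too.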

Infrastructure API metrics

Your APIs will be very dependent on how well your infrastructure is set up and how well it performs. Keeping that in mind, there are a few very important API metrics you should watch out for to ensure your APIs are performing as well as they should.

API Uptime

API uptime enables you to check if you can successfully send a request to the API endpoint and get a response with the expected HTTP status code. Usually, this is calculated based on the number of minutes or hours the server is available during a selected period.
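Uptime is usually expressed as a percentage of the selected period. The figures below are illustrative: a 30-day month has 43,200 minutes, so roughly 43 minutes of downtime corresponds to "three nines" (99.9%) availability.

```python
def uptime_percent(minutes_available, minutes_in_period):
    """Uptime as a percentage of the monitored period."""
    return 100.0 * minutes_available / minutes_in_period

month = 30 * 24 * 60             # 43,200 minutes in a 30-day month
print(round(uptime_percent(month - 43, month), 3))   # 99.9
```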

Time to First Hello World

TTFHW will be familiar to most developers, since more often than not the first program you write in a new language outputs the text "hello world". In this context, Time to First Hello World refers to the time a user needs to make their first API transaction after landing on your web page.

Memory & CPU Usage

It's important to measure the impact your APIs have on your servers. The two most important API infrastructure metrics to watch are CPU and memory usage. High CPU usage can mean the server is overloaded, which can cause severe bottlenecks.

Memory usage helps you understand resource utilization. Based on these two, you can decide either to downgrade your machine or VM, saving a few bucks, or to upgrade to ease the stress you put on it and avoid causing a bottleneck.

All of these can be difficult to manage, and especially difficult to monitor, if you don't have the right tools at hand. Any of them can severely impact API performance. However, more often than not, such issues can easily be avoided if you get a little heads-up. That's where API monitoring tools such as Sematext Synthetics come into play.

They will help you catch errors and build reliable APIs by identifying and resolving the issues before they reach your users. Try Sematext Synthetics and see for yourself how you can benefit.

If you want to see what other similar solutions are available on the market, you can check out our API monitoring tools review.

How to Choose the Right Solution for You

There are lots of solutions available with features ranging from simple ping monitoring to tools that can parse, extract and verify data from the response.

The following are features you should look at while evaluating an API monitoring solution. Depending on your requirements – who will use it and what types of metrics they need to measure – some of these features may not apply to you.

Test locations

You should be able to select locations where your users are. For example, if most of your users are in India, you need the ability to run your tests from India. If you have a global user base, schedule the tests to run from at least five locations around the globe.

Knowing where the tests are run from will help you debug some performance issues or errors. For example, if your service runs in the AWS US East region and the vendor's monitors also run from that AWS region, the responses will obviously be faster.

If your service is running behind a firewall, check with the vendor if they can provide a list of IP addresses from where the tests will run, to whitelist them in your firewall.

Customize request settings

You should be able to customize request details like headers, request parameters, and the request body. If you plan to monitor lots of APIs, it is useful if the tool supports declaring common configuration settings and reusing them across monitors.

Response timing metrics

The response time of an API is a combination of DNS, connect, SSL/TLS handshake, Time To First Byte (TTFB), and download times. The ability to track these individual metrics along with the total response time gives better insight into the performance of the API endpoint. For example, a high TTFB suggests a performance issue in the backend service, while an increase in DNS time points to an issue with the DNS provider.
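To sketch why the breakdown matters: comparing each phase against a baseline pinpoints where a slowdown lives. The timings below are invented values of the kind a monitor would report.

```python
# Hypothetical per-phase timings (ms) for one request vs. a healthy baseline.
baseline = {"dns": 20, "connect": 35, "tls": 60, "ttfb": 180, "download": 40}
current  = {"dns": 22, "connect": 37, "tls": 63, "ttfb": 540, "download": 44}

def slow_phases(current, baseline, factor=2.0):
    """Flag any phase that is more than `factor` times slower than its baseline."""
    return [phase for phase in current if current[phase] > factor * baseline[phase]]

print(slow_phases(current, baseline))   # ['ttfb'] -> points at a backend issue
```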

Ability to inspect response data to check for failures

Apart from alerting on connection or HTTP failures, the solution should allow you to customize failure conditions based on response details like headers and body. For example, you should be able to check whether the response contains a specific header name and value. Support for parsing common response formats like JSON, and for extracting and checking whether the value of a specific field matches the expected value, helps validate the correctness of the response data.
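A minimal version of such a correctness check might look like this. The response and the expectations are hypothetical; a real tool would feed in the actual HTTP response.

```python
import json

def check_response(status, headers, body, expectations):
    """Return a list of failed checks for one API response (empty = pass)."""
    failures = []
    if status != expectations["status"]:
        failures.append(f"status {status} != {expectations['status']}")
    for name, value in expectations.get("headers", {}).items():
        if headers.get(name) != value:
            failures.append(f"header {name!r} mismatch")
    data = json.loads(body)
    for field, expected in expectations.get("json", {}).items():
        if data.get(field) != expected:
            failures.append(f"field {field!r} mismatch")
    return failures

# Hypothetical response and expectations.
failures = check_response(
    200,
    {"content-type": "application/json"},
    '{"state": "ok", "items": 3}',
    {"status": 200,
     "headers": {"content-type": "application/json"},
     "json": {"state": "ok"}},
)
assert failures == []   # an empty list means the check passes
```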

SSL certificate expiry time

SSL certificate expiration is easy to miss. If a webpage or API fails with an expired-certificate error, it will hurt trust in your service. So the ability to monitor and alert before an SSL certificate expires is an important feature in API monitoring tools.
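For illustration, the days remaining can be computed from a certificate's `notAfter` timestamp with Python's standard library. The dates below are made up; a real check would read `notAfter` from the served certificate and alert when the number of days drops below a threshold.

```python
import ssl
import time

def days_until_expiry(not_after, now=None):
    """not_after: the certificate's notAfter string, e.g. 'Jun 17 12:00:00 2025 GMT'."""
    expires = ssl.cert_time_to_seconds(not_after)
    now = time.time() if now is None else now
    return (expires - now) / 86400

# Hypothetical certificate expiring 30 days after a fixed "now".
now = ssl.cert_time_to_seconds("Jan 1 00:00:00 2025 GMT")
print(round(days_until_expiry("Jan 31 00:00:00 2025 GMT", now=now)))   # 30
```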

If you want a tool that monitors both APIs and SSL certificates, you should check out Sematext Synthetics, our synthetic monitoring tool that helps you deliver fast and reliable websites by monitoring APIs, website uptime, web transactions, and more. If you want to see how Sematext ups the ante compared to other similar solutions, you can also read this list with the best SSL certificate monitoring tools.

Integration with APM and Logging

An integrated solution with application metrics, logging, and API monitoring under a single roof will help you quickly debug and understand the reason for failures or slow API responses.

If the logs and metrics of the service exposing the API live in different applications, it will be difficult to correlate the information, and switching between these applications to diagnose an issue will delay its resolution.

If you're new to logging, you can start with our introductory log management guide where we discuss how it can help your use case. If you already want to combine API monitoring with application logs and metrics for optimum performance of your website or web app, and are looking for a tool that can do this, we have the tool for you! Sematext Cloud, our cloud monitoring tool, allows you to work with both logs and metrics from a single pane of glass.

CI/CD Integrations

The tool should be able to run the monitors as part of the CI/CD pipeline. This could range from a GitHub integration to an API for invoking the tests from a Jenkins pipeline and verifying the results. The option to overlay deployment events on the metric charts also makes it easier to spot changes in metric values.

Alerting

The tool should alert when an API check fails. To minimize alert fatigue and reduce false positives, you need support for multiple alert strategies, such as alerting based on the run count, a time range, etc.
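One common strategy along these lines is to alert only after several consecutive failed runs, so a single network blip doesn't page anyone. A minimal sketch (the threshold of 3 is an arbitrary example):

```python
def should_alert(run_results, consecutive=3):
    """Alert only after `consecutive` failed runs in a row, to cut false positives."""
    streak = 0
    for ok in run_results:          # True = check passed, False = check failed
        streak = 0 if ok else streak + 1
        if streak >= consecutive:
            return True
    return False

assert should_alert([True, False, False, True]) is False   # a blip, no alert
assert should_alert([True, False, False, False]) is True   # a real outage
```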

Pricing

Most vendors charge for API monitoring based on the number of runs. Things that impact the number of runs are the locations to test from and the frequency of testing.

For example, say a vendor charges $5 per 10,000 tests per month. This might look like a small amount for a large number of runs. But if you want to test your API from 3 locations every minute, the test will execute 129,600 times a month, which costs about $64.80 per month to monitor one API.
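Running the numbers from the example above (the $5-per-10,000-runs price is hypothetical):

```python
locations = 3
runs_per_day = 24 * 60                 # one run per minute, per location
days = 30
runs_per_month = locations * runs_per_day * days
cost = runs_per_month / 10_000 * 5     # $5 per 10,000 runs (hypothetical pricing)
print(runs_per_month, cost)            # 129600 runs, $64.80 per month
```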

If your use case is global monitoring of your APIs' availability, you might need a higher limit on runs. If your customers are local to one country, you might need to run the tests from only a few locations.

Security

If your tests use credentials like API keys, cookies, and passwords, make sure they are stored and used securely. This means the ability to declare some parameters as secure, store them encrypted, and decrypt them only when needed at the destination. Also, ensure the solution runs tests in an isolated environment, since in a SaaS environment you might share the same infrastructure with other customers' tests.

Sematext Synthetics, our synthetic monitoring tool, features all these API monitoring capabilities and more. It goes beyond API monitoring to help you ensure the peak performance of your app and deliver the best user experience possible. Here's an example of how you can use it:

API Monitoring Made Easy with Sematext Synthetics

To illustrate the significance of API monitoring, below are some real-world examples where it helped identify issues and understand the impact of software changes on API performance:

Lag in Metrics Ingestion Pipeline

At Sematext, the metrics sent by our agents go through multiple stages before being persisted in the DB and becoming available for queries via the API or UI reports. We also use our own solutions to monitor our system.

The issue started with the alert below in Slack, informing us that the Metrics Query API monitor was failing:

On selecting the link, we saw the details of the failed run:

The Metrics Query API monitor queries the last 5 minutes of metrics data for a Monitoring App.

Sematext Synthetics' HTTP monitor allows you to add custom conditions based on the response data. In this monitor, we added a custom condition to verify that the length of the returned metrics array is greater than 0, i.e., that we have some data for the last 5 minutes.

From the run details screen above, we see that even though the API returned a 200 response, there was no data in the response. With Sematext's end-to-end visibility into metrics and logs, we looked into the CPU load and logs of the associated service.

Here we could see a correlation between the spikes in CPU load, failed Kafka commit messages, and the Metrics Query API failures. With this, we were able to narrow down the root cause to the consumer of the Kafka message queue.

The above example shows how the ability to check response data (correctness), rather than relying only on the response code or connection success, helps identify issues like this. Having access to end-to-end monitoring data and logs also helped us identify the root cause quickly, instead of spending time switching between separate tools for API monitoring, logging, and application monitoring, and correlating the data between them.

Measuring the Impact of System Migration

Recently we migrated our Logs backend from Elasticsearch 6.x to 7.x.

We provide Elasticsearch-compatible APIs to query a Logs App. To monitor this API, we created a Logs Search API monitor. Sematext Synthetics' HTTP monitor reports not just total response time but also the individual performance timings of an HTTP request, like DNS time, connect time, Time To First Byte (TTFB), etc.

In the chart below, we can see a significant drop in the average TTFB of Search API responses that corresponds to the migration time, which means queries run faster in the newer version. We can also see that during the migration, from around June 4th to June 15th, queries took a long time to run. The small peak at the end, around June 23rd, is due to another migration for a minor version upgrade. This helped us prove that the migration improved query performance and also assess its impact on query performance during the migration itself.

Conclusion

Whether you're about to launch a new app or web service, are building an API, or are already working with one, monitoring is crucial. Just make sure you choose the right metrics and the best API monitoring solution for your specific use case.

Sematext Synthetics allows you to keep track of the performance of your APIs to always ensure a smooth user experience. Check out the video below to learn more about what the tool has to offer:

 

Try Sematext's API monitoring tool! There's a free 14-day trial available for you to explore all its functionalities.

Start Free Trial

The Ultimate API Publisher's Guide | by Joyce Lin

Part 4: maintaining your API documentation

So let's assume you've documented your API. Now what? How do you keep your API docs up to date and accurate?

The documentation should be the most accurate reflection of how your API is expected to function. When the product changes, the documentation must be updated, but it can be challenging to keep API documentation on the same page as the API.

Definition of done

It's not ideal, but it's a common practice for documentation updates to lag behind product updates. Either the documentation is completed as an afterthought, or even worse, the documentation is only completed once someone has a problem with it either internally or externally.

One way to be proactive is to make documentation a required step of the deployment.

The documentation is a part of your API.

Whoever is tasked with determining and assessing the team's definition of done will require adequate documentation before the product can be shipped. This can be handled by your project tracking system with an assigned owner, review process, and deadline. Additionally, this can be reinforced as a part of the user acceptance testing (UAT). If testers are unable to accomplish certain tasks after interacting with your API, the documentation should be fortified.

Versioning

When the product development proceeds quickly, it helps to have a process to version the API as well as the corresponding documentation. You can use existing configuration tools or a manual process to keep things organized.

There are several ways to version your APIs. So how should the corresponding API documentation be versioned?

For minor or patch versions, differences can be called out within the same documentation. For major version differences, it's likely that documentation for both versions will need to be maintained, at least for some interim period until the earlier version is deprecated. Users should be clearly informed that there's a newer version of the docs, so they can easily navigate to the latest and won't be surprised if and when you finally decide to deprecate the earlier version.

Teams with multiple versions of an API have handled this a couple of different ways* using Postman collections. BetterCloud created separate collections to reference historical versions of their private APIs. Square included their v1 reference as a separate folder within their publicly available collection.

*Note: the ability to fork a version of your collection, complete a peer review, and then merge is coming soon to Postman.

Continuously improving your API documentation

It's one thing to make sure you have something that your users can reference, but how do you continue improving the documentation and make it more robust? In an ideal world, the continuous improvement of your API documentation goes hand in hand with maintaining your documentation.

Curse of knowledge: a cognitive bias that occurs when someone unknowingly assumes that others share the same basis of knowledge.

Ever heard of the curse of knowledge? This behavior is evident when a new team member with no shared context hears your team speaking in a slew of acronyms and company-specific terminology. Think about the terminology your team uses that might alienate newcomers.

The more knowledgeable someone becomes about a topic, the more cognitive effort it takes for them to explain it to a newcomer. In fact, this frequently requires an explicit step to put yourself in the shoes of a new user and imagine what they know or don&#;t yet know.

With technology in general, there are so many new tech workers who might have limited experience in the space and can benefit from clear and simple language. With the breakneck growth of APIs in particular, making it easy to consume your API is a market differentiator.

So what does the curse of knowledge mean for someone writing API docs? First and foremost, think about your user.

  • Will a new user be able to get started quickly with a hello world? Once they do, is there a clear path for them to continue learning?
  • If someone lands on a specific page within your documentation, will they be able to understand everything? If not, will they be able to find a reference or more resources in that context?
  • If someone has a specific issue that they're dealing with, will they be able to find documentation that sheds more light on their use case? If not, is there an accessible way for them to seek additional resources?

The idea is not to be redundant or overly verbose. Instead, introduce new concepts and terminology for any of these user scenarios. Provide inline descriptions or hyperlinks to a definition page if you're introducing a new concept within the local context of their experience.

Listen to feedback from your team members

Frequently, the people tasked with writing API documentation are the ones with the broadest knowledge of the API. This might be the developer best suited to understanding the underlying technologies, or a technical writer who is well versed in the ins and outs of the product.

While it&#;s logical for the person or team who is most familiar with the API to also document the API, the curse of knowledge reminds us that it might be more challenging for them to communicate their understanding to others.

When interns or other new people join your team, their feedback is invaluable, since it's rare that you'll ever be able to fully put on your new-user hat the way they can. Other valuable reviewers are people in surrounding functions who already have an abstract understanding of the API but may not be well versed in how it operates under the hood. Their fresh perspective will point out when you're using insider jargon that is incomprehensible to the average user.

Listen to feedback from your users

Think of a time you started poking around in the docs and got lost or overwhelmed. Now think about the last time you came across a typo or inaccuracy in API docs.

Chances are that you stewed on it, but never provided any feedback to the authors of the documentation.

As an API publisher, make it easy for users who are willing to provide feedback to do so. And then listen to the feedback!

Docker offers an example of open-sourcing their documentation on GitHub so that anyone in their community can edit the docs by forking the repository and submitting a pull request. You can also request a docs change by submitting an issue.

PHP offers another example of technical documentation that includes a section for user-contributed notes at the bottom of every page. If you're reading something in the docs that doesn't quite make sense, you can ask questions or add your comments directly on that page.

For both the Docker and PHP docs, they have made it easy to provide feedback at the time you're reading through and referencing the docs. It's relevant, it's easy, and you're more likely to do it.

Look at important metrics

For other product feedback, you might be able to look at your metrics, hold focus groups or usability tests, or do market research to get the feedback and validation that you're looking for. For documentation, it's not so straightforward.

If your API documentation is subpar, you might experience lower adoption and usage. But lower than what? Hard to tell.

For web-based API documentation, there are a number of web metrics that can provide insight into optimizing your documentation.

  • Most viewed pages
  • Most clicked hyperlinks
  • User journey from a typical landing page
  • Most searched terms
  • Search terms returning zero results
  • Common referral sources

You can look at trends over time, directional changes after an update, or try A/B testing content, style, and formatting.

Beyond direct documentation metrics, frequently asked questions provide a qualitative and quantitative means of addressing pain points. Tag and identify the top issues from your support ticket platform, forum, bug tracker, or even from face-to-face discussions.

Can these issues be solved more easily with some documentation? If your support team continually answers questions without a resource to link to, this content should be prioritized in the queue.

Can these issues be solved with better documentation? If your support team continues to receive questions about something that&#;s already been documented, this could be attributed to a few reasons.

  • Unidentified gotchas: the API itself might be exhibiting unexpected behavior, and a useful error message can guide the user to the correct solution. If updating the API is not a viable solution, you should call it out and document the accepted solution.
  • Counter-intuitive search and navigation: the documentation might be hidden to the user because they&#;re expecting documentation to be associated with a different concept or workflow or they don&#;t know how to refer to the issue according to the company-specific terminology. This is another example where inline definitions and cross-referencing hyperlinks would help.
  • More clues and context: the documentation alone may not be sufficient, and a step-by-step tutorial or code samples will provide additional clues and context to implement a solution. Providing examples from different perspectives can shed light on a user&#;s particular use case.

This type of feedback will identify edge cases, common gotchas, and inform what needs more clarification in the documentation.

Keeping your documentation up to date in Postman

We talked about tips for maintaining your documentation, and why you should do it. Now let's dig a bit deeper into how to keep your documentation in Postman.

First of all, there are several ways to create documentation in Postman.

  • Automatically generate a web view
  • Embed a Run in Postman button
  • Share a collection link
  • Share a JSON file

While the last three options are not officially "documentation", people still use them to fulfill the purposes of documenting their APIs for internal and external audiences, so let's include them in the discussion.

Let's start with automatically generating documentation for your APIs. Postman will generate and host web-viewable documentation based on the metadata in your Postman collection. This documentation can be viewed in a browser, accessed privately within your Postman team, or publicly if you choose to publish it.

If you plan on making changes to the API, Postman syncs your updates in real time. Any changes that you save to the underlying collection will be reflected instantaneously in the documentation on the web.

Documentation generated by Postman

The documentation webpage includes a default Run in Postman button at the top that allows users to download a copy of the underlying collection to their instance of Postman. Your users can start interacting with your API right away in the Postman app.

Clicking the Run in Postman button downloads a copy of the collection.

If you published your collection(s) in the Postman API Network, the button is refreshed whenever you save changes to the underlying collection. You don't need to worry about keeping your button updated since that happens automatically. However, anybody who has previously downloaded the collection will be working off the version they downloaded.

Importing a collection from the API Network

Another option for sharing API documentation is to create a stand-alone Run in Postman button. Some publishers will embed the button in a blog post or in the README file of a repository. Once again, users will work off the version of the collection they download. Notice there's a different process to update the underlying collection: API publishers must manually refresh the collection button, and then users can download the latest collection.

The same rule applies if you're using a collection link to send to a co-worker or collaborator. Users will work off the collection they import at that point in time. Updating the underlying collection requires the person sharing the collection to manually refresh the collection link, and then all users can import the latest collection to work off that version.

The last option for sharing API documentation is to share a physical file. In the case of Postman, you can export a JSON file of the Postman collection from the Postman app. Although frequently used, this is the least attractive option if you're in the process of developing an API and changes are inevitable. In this scenario, version control is cumbersome. To maintain any changes in a collaborative scenario, you will need to pair this with some other version control system, such as checking the file into git.

In a different scenario, if you're documenting a transient use case, perhaps for debugging, it's very easy to send over a collection link or physical file to reproduce the issue for a colleague.

With all these options for sharing your collection's documentation, you may be wondering which one to use. That's up to you, but here are some thoughts for your consideration.

  • Maintainability
  • Describability
  • Accessibility
  • Discoverability

Maintainability: If your collection is in a state of development, it's likely to change and people may be providing feedback. In this case, ensuring that everyone is reviewing the same version is important. On the other hand, perhaps your collection is pretty well baked, changes are unlikely, and you just want to allow people to reference it. Keeping track of the latest version becomes less important.

Describability: If this is an internal collection and most collaborators already have a handle on how the API works and functions, then you may not need to fully explain and describe what's going on. It's a gamble, but some people are in this lucky boat. If you're in an organization with new team members, partners, or external consumers, then teaching them how to use the API is necessary.

Accessibility: How other people access your Postman collection is fully within your control. Permissions for a web-viewable collection can be limited to the individual, the team, or opened up to the broader public. Access to a collection via the Run in Postman button, collection link, or JSON file is based on whom you share it with. If you send someone a collection link and they forward it on to someone else, anyone with the collection link can access your collection.

Sometimes your API documentation is used by non-technical team members, or those who might not be Postman users. Web-browsable documentation can be published so that anyone with an internet connection can access and reference it.

Discoverability: Along the same lines as allowing your users to access the documentation, allowing them to discover your documentation is also important. For publishers who want their API to be discovered by external consumers, Postman users publishing their documentation have the option to submit their API to the Postman API Network. This allows other Postman users to search for and import a collection into their local instance of Postman.

This doesn't mean the other options are not discoverable; however, the ability for others to discover your API is not inherent in the documentation mechanism. With these options, discoverability depends on how and where you share your collection, like embedding a stand-alone Run in Postman button in a tutorial on your developer portal.
