Introduction to Contract Driven Development with Specmatic

Transcript

Hi, welcome to this demo of contract-driven development, where I’m going to use Specmatic, an open-source tool, to turn your API specification into executable contracts. We have an app which sends a request to a BFF, a backend for frontend, which in turn sends a request to a domain service. Notice you could have one or many domain services. Once the domain service gets back with a response, we want the BFF to log the message onto a Kafka topic, which allows our analytics server to pick it up and do its thing. The BFF would then get back to the application with its response. In order to allow the app and the BFF to be developed and deployed independently, we would like to capture the contract between them in an OpenAPI specification. This would capture things like which URLs are exposed by the BFF, the request parameters, which parameters are mandatory and which are optional, and also what response the BFF would give to this request, which means all the HTTP statuses it can come back with and the schema of the response. Similarly, between the BFF and the domain service, we would have an OpenAPI specification.

Now, for Kafka, we would capture the specification in something called AsyncAPI, where you can describe all the topics that are available and the message formats of those topics. Now, let’s jump into a demo. All right, to start this demo, let’s first start our domain service, which is the Order API. It’s a Spring Boot app, so I’m just going to get the app started. And there we have the domain service. We need to start Kafka. Let me go here quickly, and you will see that I’m using Specmatic to stub out Kafka. We’ll get into the details of this a little later. Let’s get this kicked off, and you’ll see that Specmatic is starting it up. It figured out which ports are available, and as you can see, it is now listening for messages on this topic, product queries. Let’s also start our BFF layer, again a Spring Boot application, which we’re going to kick off with a Gradle command. There we have the Spring Boot application started as well. Let’s make sure all of these things are wired up and working correctly. I’m going to try and make a curl request, and I expect one message to come back when I make this request.

And sure enough, yes, we got one message. This means that all our services are now wired up correctly. With that, we are ready to get started. I have this OpenAPI specification which describes my BFF layer. It’s got a bunch of paths here. There is /products, to which I can make a POST request to create a new product; it can respond with a 201, 400, or 503. I also have a find available products operation, which takes a query parameter and also a header parameter called pageSize, and gets me back a list of products. I can also create orders, and so forth. I’m going to use the Specmatic plugin, which runs inside VS Code, to run the contract tests. There we go. Here you will notice that I am pointing to the BFF API specification, which we just looked at a minute ago. I’m also pointing to where my application is running; it’s running on port 8080. With that, let’s run these tests. Notice that I’ve not written a single line of code at this point. When I run this, it’s going to go ahead and generate tests. It’s executing seven contract tests for me.
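
For reference, the specification just described would look roughly like the sketch below. Only the paths, parameters, status codes, and the type enum come from what is shown in the demo; the schema name, lowercase enum values, and overall shape are assumptions on my part.

```yaml
# Illustrative OpenAPI sketch of the BFF paths described above (not the demo's actual file)
openapi: '3.0.3'
info:
  title: Order BFF
  version: '1.0.0'
paths:
  /products:
    post:
      requestBody:
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/Product'
      responses:
        '201': { description: Created }
        '400': { description: Bad Request }
        '503': { description: Service Unavailable }
  /findAvailableProducts:
    get:
      parameters:
        - name: type
          in: query
          schema: { type: string }
        - name: pageSize
          in: header
          schema: { type: integer }
      responses:
        '200': { description: A list of products }
components:
  schemas:
    Product:
      type: object
      required: [name, type, inventory]
      properties:
        name: { type: string }
        type:
          type: string
          enum: [gadget, book, food, other]   # Specmatic generates one test per enum value
        inventory: { type: integer }
```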

Where did it find these contract tests? It basically figured them out from the OpenAPI specification. Let’s zoom in. As you can see, it’s made a request to /products, and it figured out that it can send 177 for inventory because it’s of type integer. There is type Gadget, and then a name, for which it has again generated a random value, and the server responded with a 201 and gave back an ID of 4. We then mark this as a successful test. Similarly, it’s made another request, and this time you will notice that instead of Gadget, which it used here, it has used type Book. How is Specmatic figuring out that it needs to send these values? Let’s quickly go to the OpenAPI specification and look at this section. You will see here that for type, we have defined an enum: Gadget, Book, Food, Other. So what Specmatic does is take that and iterate through it, and you will see that it’s made a request for each one of these types. And of course, it has also made requests to /findAvailableProducts and got a list of products back. It has validated that against the response in the specification and said, yes, this makes sense.

All of that matches, and hence this test has succeeded. It also tried to make a request to find products with type string, and it got back a 400 error; we’ll get to why this happened in a minute. Finally, it tried to make a request to orders with a certain product ID, and it got back a 404, which means that product does not exist. One other cool feature of Specmatic is that it also shows you an API coverage report. Very quickly, you can see which paths exist, both in your API specification and in your application, and it reports whether it was able to cover them or not. In this case, it found /findAvailableProducts. There is only a GET request on it, and it has 200, 400, and 503 responses. It was able to make two GET requests, and those two were covered, but it was not able to exercise the 400 or 503 responses. Similarly, it also found a /health endpoint, which it reports as missing from the specification, meaning it found it in the application but not in the specification. Wait a second. How is Specmatic figuring out that /health exists in the application but not in the specification?

Here we use the actuator, which comes built in with Spring Boot. Using the actuator, you can figure out which paths are available; this can be very handy if you want to do any observability. Specmatic leverages the same thing and figures out: okay, I found a /health endpoint on the application, but I do not see it in the specification. Similarly, it also found /orders in the application and reported it as missing. However, you can notice that there is what was supposed to be orders; it looks like a typo, which is in the specification but not in the application, and hence it’s marked as not implemented. Similarly for /products. This is cool because very quickly you get an overview of what is in your specification and what is in your application. With the Specmatic plugin, we were able to figure all of this out and even execute some tests. Now, let’s clean this up and try to get better coverage. The first thing I want to do is fix this typo in the specification so that we can make it work. Let’s go right here.

We see that there is a typo, so I’m going to fix it. With that, let’s run the contract tests again, right here. Specmatic goes and runs these contract tests again. Notice that this time it basically says: /orders, yes, it is available in both places, and it did in fact cover it. This health endpoint is interesting. I actually don’t want the health endpoint to be in my specification; it’s purely for monitoring and observability. So I’m going to use one of the features we have here, where we can exclude the health endpoint or other such endpoints. With that, let’s run this again. Specmatic goes ahead and executes all of these tests. This time, you will notice that /health is not being reported, and we’ve now been able to achieve 33% coverage on all of the paths that we have. Thirty-three percent is great; we have the positive cases, the 200 cases, covered. None of the 400 or 503 cases are covered at this point. And we also have two failing tests, as you can see. Out of the total of seven tests generated, five are successful and two are failing.

Why are the two failing? Because Specmatic has tried to guess certain data and generate requests with it, but that data does not exist. For example, here we tried to create an order with a product ID of 674, but 674 is not actually a valid ID in our database; there isn’t a product with ID 674. At this point, what we need to do is provide examples in our specification so that Specmatic can use them to guide its test generation. We’re going to use our plugin to generate examples. I’m going to go ahead and kick that off. You will see here that we are leveraging GPT-4 to generate these examples. All right, there we go. As you can see, Specmatic has been able to leverage GPT-4 to generate examples that are relevant for our context. Here we can see the difference, before and after. For products, it has generated an example of a successful request it can make, with iPhone, Gadget, and 100 as the inventory, which makes sense. Similarly, it has said: okay, I should get back ID 1, which is a valid ID in our case. There are several different examples, and you can also notice that it has generated another example for the GET, where we are saying that when I do a GET, I should get back a product with iPhone, ID 1, type Gadget, and even a little description saying, Latest iPhone model.
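
These generated examples end up as named OpenAPI examples in the specification; as far as I understand, Specmatic pairs a request example with the response example that carries the same name. A sketch of roughly what the POST /products pair might look like, with illustrative names and values rather than the exact generated content:

```yaml
# Sketch of a paired request/response example (example name and values are illustrative)
paths:
  /products:
    post:
      requestBody:
        content:
          application/json:
            examples:
              CREATE_PRODUCT_SUCCESS:
                value:
                  name: iPhone
                  type: gadget
                  inventory: 100
      responses:
        '201':
          content:
            application/json:
              examples:
                CREATE_PRODUCT_SUCCESS:   # same name links it to the request example
                  value:
                    id: 1
```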

Using GPT allows us to generate really relevant examples. I could have manually written all of this, but since you can generate a lot of these examples by leveraging GPT, why would you want to do it by hand? All right, with that, I think we can close this comparison window. Now, let’s go back and run our contract tests. Actually, I’m going to just reuse this window. Let’s clear this and run it. Here we go again. You will notice that this time Specmatic has generated only three tests, where it used to generate seven; now it’s down to three. The reason is that Specmatic will now use only the examples you have provided and use those to generate the tests. In this case, you can see that we’ve got all our tests passing; we don’t have any more failures. Of course, we have one test covering each of these paths, so we’re still maintaining the 33% coverage. The question is, can we do something better? Yes, in fact, let’s go ahead. What I’m going to do now is use this feature called Generative Tests. What are generative tests? Let me quickly run this, and then as the tests run, I’ll explain what generative tests do.

I’m going to clear this out so that you can see what’s going to happen. Let’s go ahead and run this. Wow! You can see now that Specmatic is generating 41 tests. As things scroll by, you can see there are some positive and some negative scenarios it’s going to generate. It’s going to generate a whole bunch of different tests, and they are running. And wow, you can see that we have 41 tests generated; only six succeeded and 35 failed. All right, so how did Specmatic generate 41 tests? We took inspiration from two things here: property-based testing and mutation testing. Let me explain each of these. What is property-based testing? In our case, we can look at the OpenAPI specification: if a certain field or parameter is marked as mandatory, then we know that’s a property of this API, that for this particular request, this particular field or parameter is mandatory and we have to send it. And so if you don’t send it, you would expect a 400 Bad Request to come back. We can think about these properties of the OpenAPI specification or AsyncAPI specification and leverage them to help construct a set of tests for us.

Then, to build on that idea, we can look at mutation testing, where essentially you mutate the code and then send requests to it to see if the tests that were passing earlier start to fail. Instead of doing that exact same thing, we took inspiration from it and changed the idea a little: instead of mutating the code, we mutate the request. For example, if something is mandatory and we send it, we get a 201 back. But if we don’t send it, then we expect a 400 to come back, and you will see some examples of this that we have generated here. That combination of property-based testing and mutation testing is what we call generative tests, and that’s what has allowed us to generate these 41 tests. However, 35 of these tests are failing. Let’s understand why. They’re saying the key named message is in the response, but not in the specification: response body message. Why is that happening? Let’s look at one of the requests it sent. It sent this request to orders with some value, and count should have been a number. But in this case, we have mutated the value and, instead of a number, sent a string, just to make sure that your code can handle this and does not end up throwing an exception.
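
To make that concrete: if a valid order request carries a numeric count, a mutated variant like the one below replaces it with a string, and the only acceptable answer per the specification is a 400 Bad Request. The field names here are assumptions based on what is shown in the demo.

```json
{
  "productid": 10,
  "count": "two"
}
```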

And what we see is that this negative scenario has failed because the key named message, which is this one here, is not in the specification. So what is in the specification? Let’s go to the specification here, and this is the bad request response. As you can see, we have timestamp. Yeah, sure enough, we have status. Okay, sure. We have error. Cool. But here in the specification we have path, whereas the actual response has message. Yeah, that makes sense. This looks like, again, a mistake in the specification; this should have been message. So let’s update the specification with that, clear this out, run the contract tests again, and see what happens this time. All right, it’s generating the 41 tests again. Cool, and as you can see, we have now finished; all the tests ran and all of them are passing. Wow, this is pretty cool. Let me look at the API coverage, and you can see that now we have 67% on all three paths. You’ll also notice that some of the 400 cases are being covered, which is pretty cool.
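
After that fix, the bad-request schema would look roughly like this sketch, with message in place of the incorrectly named field; the exact schema in the demo’s spec may differ slightly.

```yaml
# Sketch of the corrected 400 response schema
components:
  schemas:
    BadRequest:
      type: object
      properties:
        timestamp: { type: string }
        status: { type: integer }
        error: { type: string }
        message: { type: string }   # previously misnamed in the specification
```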

We are now covering both 200 and 400. Let’s look at some of these just to understand what it has done. In the beginning, there are a whole bunch of positive scenarios, and these are the standard ones that we’ve seen before. Let’s scroll down to a negative scenario. Here we have a negative scenario where, for name, which is a mandatory and non-nullable field, Specmatic has sent a null, and it has, of course, got a 400 Bad Request, which is expected in this case. And hence we are saying: yes, this negative scenario has succeeded. Our application knows how to handle this correctly and give back a 400 response. And of course, it will iterate through all the different enum types and have a test for each one of those. We’ll also see a few other interesting examples, where name is sent as 470: name is a string, but we’re sending a number to see what happens and to ensure the application throws a 400 back, and that test has also succeeded.

Like this, Specmatic figures out and sends different combinations. Again, you can see here that name is sent as a boolean value. Similarly, you’ll also see some where we play around with inventory and see if that is handled. Here you can see type was sent as null, and so forth. These are all valid examples of negative tests, to make sure that the application can handle all of that. That’s how we arrived at this combination of 41 tests, and we were able to validate both positive and negative scenarios. Let me just quickly recap. We started with an OpenAPI specification, we got the application running, and then we used the Specmatic plugin to generate tests for us. We did not write a single line of code. Just with the specification, it was able to generate seven tests for us. Five tests were passing and two were failing because the examples were missing. Along the way, we also got an API coverage report, and we figured out there were some mismatches between the specification and the application. We were able to fix those, and we were also able to ignore the /health endpoint, which we didn’t want to cover in the specification.

We now had tests working; however, the examples were missing. Again, we used Specmatic, leveraging GPT-4, to generate examples for us. With that, we were able to bring the test count down to the three specific examples that were given, and all three of those tests were passing. Then we turned on Generative Tests, and with those we were able to generate 41 tests. Initially, quite a few of those tests failed because, again, there was a mismatch in the specification. But once we fixed the specification, we were able to see all 41 tests pass, and now we have pretty good coverage of 67%. However, I’m still worried about this 503; we do not seem to cover it. For that, let’s look at another interesting aspect of Specmatic. If we go back to our slide over here, you will see that we have a domain service running, basically catering to the requests that the BFF is sending. In this case, we have a real domain service running, and the BFF is connecting to that real domain service. Now, for those 41 tests that I have, I want the domain service to keep responding with valid responses like it’s doing now.

However, I want to add one new scenario. In that scenario, I want to make sure that the domain service does not respond back in time. Let’s assume that my BFF has a timeout of three seconds to receive a response. But when the BFF contacts the domain service, the domain service takes more than three seconds, let’s say five seconds, to respond. In that case, I would expect my BFF to give me a 503 back: the service is unavailable and I really can’t do anything. I want to test this scenario. How do you think we can do this? I want the 41 tests that I already have to keep getting responses within the three-second timeout, but for the 42nd scenario only, I want the domain service to not respond in time. Well, you could be sitting there watching these tests run, and when the last scenario is about to run, you could shut down the domain service, make sure that it times out, and simulate it that way. But how would you do this in your CI pipeline? It’s just not practical.

For that, we do have a feature in Specmatic that lets us simulate these conditions. But first, I don’t want to rely on an actual domain service. Instead, I want to stub out the domain service and then do all kinds of fault injection and different scenarios, with full control over it. So let’s see how we can replace the actual domain service that is running with a Specmatic stub. The good news is that if you already have an OpenAPI specification for it, you can leverage that. But it may also happen that you don’t yet have an OpenAPI specification for the domain service. Don’t worry, Specmatic has a feature called proxy, through which we can record the interactions between the BFF and the domain service and generate an OpenAPI specification along with the request-response pairs, what we call stub data, so that you can replay all the requests exactly the same way. This is what we call service virtualization. Let’s look at how that can be done. Let me quickly jump here and clear this out. I’m going to start a proxy server.

What I’m saying is: hey Specmatic, this is my target, localhost 8090, which is where the domain service is running, and record all of the interactions in a Recordings folder for me. I’m going to kick that off. With this, Specmatic says: okay, I now have a proxy server running on 9000, and it will basically channel all requests to 8090. Perfect. Also, here in the application properties, you will notice that the Order API, which is our domain service, is configured at 8090. Now we want to say: hey, not 8090, go to 9000, where our proxy server is running, so that we can channel all the requests through it. I make that change, and let me quickly restart my BFF layer so it picks up the change. There we go, it has started. Now, let’s go back to our contract tests and rerun them. I’m just going to run the contract tests again. You’ll see it’s going ahead and rerunning those 41 different scenarios. And if I go to the proxy, you will see that it is now recording all the traffic: the requests that are going through and the responses that are coming back.
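
The proxy invocation is along the lines of the command below. I’m showing the standalone CLI form, and the exact flag syntax can vary between Specmatic versions, so treat this as a sketch rather than the definitive incantation.

```shell
# Proxy all traffic to the domain service on 8090 and record the interactions into ./Recordings
java -jar specmatic.jar proxy --target http://localhost:8090 Recordings
```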

Looks like it has stopped, which means our 41 tests have run and all of them are successful. Notice that the application is none the wiser; it has just behaved the way it was behaving earlier, except that we have routed all the traffic through this proxy server. Let me shut down the proxy server, and when I do, you will notice that it has generated the OpenAPI specification for the domain service, and it has also generated 13 stubs. You’ll notice that we had 41 tests running, but only 13 stubs have been generated. This is why we call this Intelligent Service Virtualization: it is not just a dumb recording of every request. It’s actually looking at those requests and saying, yeah, these two requests are similar, I can generalise them, and then it distills them down to these 13 unique requests. Let’s go to the Recordings folder and see the OpenAPI specification that has been generated. It has generated an OpenAPI specification for /products, and it says: okay, there is a GET on this which takes a query parameter called type, and it has a bunch of other parameters that it’s expecting.

Then it sends back a response, and it has also nicely reused the response schema by capturing it over here. You can jump over here and see that id, inventory, name, and type are what come back in the response. Like this, Specmatic has now recorded several different endpoints for the domain service and also generated the stub data. We can look at any one of the stub data files and see what’s in it. It says: okay, an HTTP request to /products, a POST request, with this in the header and this in the body, and then the response came back with a status 200 and an id of 37. That is just a simple request-response pair captured as a stub file, and each of them will have some unique flavour of request and response. Perfect. With that, I now have the OpenAPI specification for the domain service, and I also have some stub data for it. With that, I should be able to run Specmatic in stub mode and point it at the generated OpenAPI specification, saying: use this generated OpenAPI specification to generate a stub. Again, you’re not writing a single line of code to generate the stub; you’re referring to an OpenAPI specification to generate it.
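
A recorded stub file is just such a request-response pair in Specmatic’s expectation JSON format. A sketch of roughly what one of them looks like, with illustrative values:

```json
{
  "http-request": {
    "method": "POST",
    "path": "/products",
    "body": {
      "name": "iPhone",
      "type": "gadget",
      "inventory": 100
    }
  },
  "http-response": {
    "status": 200,
    "body": {
      "id": 37
    }
  }
}
```

And running the stub server itself is a single command against the generated spec, something like the line below. The spec file name is a placeholder, and flag names may vary by Specmatic version.

```shell
# Serve the recorded spec and stub data as a stub server on port 9000
java -jar specmatic.jar stub Recordings/order_api.yaml --data Recordings --port 9000
```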

This is a big deal, because most often people have to write a lot of code to generate these stubs, or use some tools to generate them, and those can very quickly drift away and go out of sync. But in this case, because we are referring to the same OpenAPI specification that the provider is also using to generate contract tests, they don’t drift away. They point to the same single source of truth, which ideally exists in a central Git repo. So anyway, with that, let me quickly run this, and you will notice that it goes ahead, loads all the stubs up, and says: okay, I now have a stub server running for you at port 9000, and you can go ahead and use it. Just to be sure that we are not fooling ourselves, I’m going to go here to my domain service, the Order API, and kill it. Now the real service is no longer running, and we only have the Specmatic stub running at this stage. What do you reckon will happen when I run these tests? What do you expect to see? Well, I expect to see that everything works as before and there are no surprises.

Let’s run the tests and see what happens now. I expect that the application will be none the wiser. It’ll still go ahead and run those 41 tests, and you will also be able to see at the stub server that requests are coming in and it is responding to all of them. There we go. All 41 tests have succeeded. We don’t have the downstream domain service running; we are able to work entirely off a stub that was generated purely from the OpenAPI specification by Specmatic, again without writing a single line of code. This is why we say this is a no-code solution. Now, of course, we still have not done anything to cover the 503 case, because so far all we have done is stub out the downstream service so that we have much better control and can simulate different conditions. With that, let me jump in and show you how we can generate a 503 response in this case. We want to go to the generated spec. But before that, let me add another example here which basically says: any time I make a GET request for, let’s say, the type other, I want the downstream call to time out, and that should result in a 503.

Let me find the relevant section, so find available products. We have an example over here which is a success example. I’m going to add another example named timeout; that’s just a name I’m giving it. And it has a value of 100, which really does not matter. Similarly, for the query parameter, I’m going to add another example, whose value is going to be other. Any time I send other in the query parameter for find available products, I expect… let’s go to the response here real quick. This is my 503 Service Unavailable response, so I’m going to go here and give an example. So, examples, and you can see GitHub Copilot has already guessed the response that you would want: Timeout, a 503 Service Unavailable because of a timeout. With that example in, I have now added another example to my OpenAPI specification, which essentially says that any time I send the query parameter as other, I expect to get a 503. To make this possible, we will have to go to the generated stub data. Let’s look at one of the stub files. Here, we have /products, a GET, and we are responding back with some valid response.
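
In the spec, that addition would look roughly like this: the header and query parameters each get an example named timeout, and the 503 response gets a matching example of the same name. This is a sketch, and the exact field values are illustrative.

```yaml
# Sketch of the timeout example added to the find available products operation
paths:
  /findAvailableProducts:
    get:
      parameters:
        - name: pageSize
          in: header
          schema: { type: integer }
          examples:
            TIMEOUT:
              value: 100
        - name: type
          in: query
          schema: { type: string }
          examples:
            TIMEOUT:
              value: other
      responses:
        '503':
          content:
            application/json:
              examples:
                TIMEOUT:
                  value:
                    timestamp: "2024-01-01T00:00:00Z"
                    status: 503
                    error: "Service Unavailable"
                    message: "Timeout"
```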

Back in the stub data, I’m just going to duplicate this recorded stub, make a copy, and rename it to something like stub timeout. In this case, instead of gadget, I now want to put other. And here I’m going to add a new property which essentially says: delay the response by five seconds. So whenever you get a GET request to /products for type other, delay the response by five seconds. You will notice that as soon as we have updated the stub, the Specmatic stub server automatically reloads it, so we have this loaded now. With that, let me quickly go back to our contract tests and run them again to see what happens. This time I would expect it to run 42 tests, with the one new scenario that we’ve added for the timeout. With that, it goes ahead and runs all the tests. And sure enough, as you can see, we have 42 tests that ran, and all 42 succeeded. Let’s look at this. You will see here that we’ve got 100% API coverage for /findAvailableProducts. You’ll also notice that the 503 case is now covered. How did this happen?
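
For reference, the duplicated stub file with the type switched to other and the delay added would look roughly like the sketch below. The name of the delay property in particular is an assumption on my part; check the Specmatic documentation for the exact key your version expects. With this in place, any GET to /products with type other is answered only after five seconds, longer than the BFF’s three-second timeout.

```json
{
  "http-request": {
    "method": "GET",
    "path": "/products",
    "query": {
      "type": "other"
    }
  },
  "http-response": {
    "status": 200,
    "body": []
  },
  "delay-in-seconds": 5
}
```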

This happened because we were able to simulate a timeout, and that resulted in a Service Unavailable response. Let’s quickly look at where we generated the 503. You will notice here that I made a request to find available products with type other. Whenever we send type other, we expect the downstream service to time out, which the BFF layer then propagates as a 503 Service Unavailable. We’re also validating that the 503 response matches the schema we have defined for it. So with that, we have been able to use Specmatic to generate the contract tests. We’re also able to stub out downstream services and do fault injection and other kinds of negative scenarios, because we have control over the downstream service through a stub that we generated. And now we are able to get a fairly high degree of confidence that our specification is in fact in line with the actual implementation. That, in a short demo, is the power of Specmatic. Also, I said earlier that I’d show you how we were stubbing out Kafka. So here, if I go to this, you will notice that all these requests are coming in and the messages are being posted onto the Kafka topic.

Essentially, this is a Specmatic stub that is running, because we don’t really want to have a real Kafka broker instance running on this laptop. While you can certainly do that, there will be inherent latency and other kinds of things to deal with, and you would not get the certainty that you want for your contract tests. So at the end of this demo, what we’ve been able to achieve is to stub out both of the dependencies that the BFF layer has: we were able to stub out the domain service, and we were also able to stub out the Kafka dependency. Now we have full control over our BFF layer, and we can contract test it and make sure that it is in line with the specification. I’ve been showing you the demo of running these contract tests from the plugin. However, I want you to understand that you can also run all these tests from code. All the changes that we’ve made can be checked in and can be run by other developers on their local machines as well as by your CI pipeline.

To run it that way, Specmatic can generate this one-time contract test code, where we essentially just need to configure where our application is running, where our stub server is running, and where the Kafka mock is running. You can then specify whether you want generative tests on or off, specify the location where the stubs are available, and start the application. Let me quickly run this test now. As you can see, it is kicking off and running these tests. And there we have the 42 tests, the same tests that we saw running earlier. All those 42 tests are now running from within my IDE, and the same thing can run from the CI pipeline as well. This way you can ensure that these tests are continuously run by other developers and also by your CI pipeline. Cool. That was a quick live demo. Let’s just quickly recap. We have the BFF here, which is the system under test. We were able to use Specmatic to generate contract tests from the OpenAPI specification. We were also able to stub out the domain service dependency through a Specmatic HTTP stub, which was, of course, based off the OpenAPI specification.
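
Coming back to that generated test wrapper for a moment, here is a minimal sketch in Kotlin of roughly what it looks like. The package name, the property keys, and the generative-tests flag shown here are my assumptions about the generated code rather than a verbatim copy of it.

```kotlin
import io.specmatic.test.SpecmaticJUnitSupport   // older versions use the `in`.specmatic package
import org.junit.jupiter.api.BeforeAll

// Minimal sketch of the one-time contract test wrapper
class ContractTests : SpecmaticJUnitSupport() {
    companion object {
        @JvmStatic
        @BeforeAll
        fun setUp() {
            // Point Specmatic at the application under test (the BFF)
            System.setProperty("host", "localhost")
            System.setProperty("port", "8080")
            // Assumed flag name: turn generative (positive + negative) tests on or off
            System.setProperty("SPECMATIC_GENERATIVE_TESTS", "true")
            // The generated code would also start the Spring Boot app, the HTTP stub,
            // and the Kafka mock here, or point at already-running instances.
        }
    }
}
```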

We were also able to stub out the entire Kafka piece with a Kafka mock that was generated using the AsyncAPI. What that essentially did was spin up an in-memory broker for us and create the topic that was in the AsyncAPI specification. We were also able to do schema validation, checking whether the messages posted on the topic were actually valid as per the AsyncAPI. Initially, we set expectations through those JSON stub files that you saw on the HTTP stub, and we were also able to set expectations on the Kafka topics. Then Specmatic generated requests to the BFF layer, which went through to the stub, and the stub responded. The BFF layer then put the message on the Kafka topic and sent the response back. Whenever the response came, Specmatic was able to validate that the response was in line with the response schema and data types specified in the OpenAPI specification. It was then also able to verify that the number of messages posted on the topic and the schema of those messages were, in fact, as per the AsyncAPI specification.
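
Just to make the AsyncAPI side concrete, here is a minimal sketch of the kind of specification that drives the Kafka mock. The channel name roughly matches the product queries topic mentioned earlier, but the message name and payload fields are assumptions for illustration, not the demo’s actual file.

```yaml
# Minimal AsyncAPI sketch (illustrative; channel/message names are assumed)
asyncapi: '2.6.0'
info:
  title: Order Analytics Messaging
  version: '1.0.0'
channels:
  product-queries:            # the topic the BFF publishes to
    publish:
      message:
        name: ProductQuery    # assumed message name
        payload:
          type: object
          properties:
            name:
              type: string
            inventory:
              type: integer
```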

That is, in a nutshell, how we are able to contract test the BFF layer and make sure that it is in line with the specification and interacts with its downstream dependencies as expected and as specified in their respective specifications. If you were to do this without Specmatic, typically the consumer would do continuous integration with some stub that they had hand-created on their own, and things would all look good in their local environment and in the continuous integration environment. Similarly, the provider would do their API testing locally and on CI. However, when they came to the integration environment, they would realise that maybe there are some disconnects, and that would cause problems in integration, and the entire environment can become unstable. This also blocks your path to production, and the later you find these issues, the more expensive they get. So the whole idea with contract-driven development is to shift this left and give that feedback as early as possible, ideally on the developer’s laptop. We do this through Specmatic, where we take an OpenAPI specification or AsyncAPI specification and generate a stub for you, which is service virtualization.

The consumer can now work with this stub as if it were talking to the real thing. We take the very same specification, because that’s the single source of truth, and generate tests off it to make sure that the provider is in fact adhering to the specification. This is what ensures that they don’t drift apart, and both can independently develop their pieces while staying completely in sync. All right, so having a single source of truth is extremely important, because even though many teams agree on an OpenAPI specification, they may miss updating it, or they may not refer to the current version, and you may still end up implementing things in the wrong way and have an integration issue at a later point in time. So what we do is put all of this in a central Git repo: we take the OpenAPI specification, we create a central Git repo, and we go through a pull request process where we do linting to make sure that the OpenAPI or AsyncAPI specifications are as per the standards we have agreed on. We also then do a compatibility test to make sure that when you’re making any changes to these specifications, you’re not accidentally breaking backward compatibility.

So how does this backward compatibility check work? Specmatic basically takes the new version of the specification and picks the old version of the specification from the Git repo. Notice that earlier I explained that Specmatic can take the very same specification and run it as a stub in service virtualization mode, and also run it as tests in contract testing mode. So what we do, and this was almost an accidental discovery, I would say, is take the new specification and run it as a stub, and take the old specification and run it as tests. The old specification will make API requests to the new specification that’s running as a stub. As long as all the old tests pass, you know that your new version of the API specification is backward compatible. That’s what happens: real tests get executed, it’s not a simple text comparison. Then once the tests pass, someone reviews and merges the change. This ensures that your single source of truth, which is the central contract, always stays up to date. All right, to summarise, Specmatic takes the OpenAPI specification.

The consumers can run their tests locally by using the contract as a stub, that is, service virtualization. The providers can use Specmatic to generate contract tests from the same contract and validate whether their implementation is in sync with the specification. The same thing can be leveraged in CI, where both sides refer to the single source of truth, which is the OpenAPI or AsyncAPI specification. And when they come to an integration environment, you do not expect to see any surprises, and you can get to production as quickly as possible. That, in a nutshell, is what we call Contract-Driven Development.

We have recently launched Specmatic Insights, which allows teams to visualise their service dependencies: you can take all the data generated by running these contract tests in your pipeline, have this visualisation built out of real data, and then see which service depends on which other service and which endpoints it depends on. Do you have a single point of failure? Do you have a choke point in your architecture? You are also able to drill down into a specific API and look at all of its consumers, its dependencies, and what types of dependencies they are.

Is it an HTTP dependency? Is it a Kafka dependency? You are also able to monitor the overall coverage and how things are improving in terms of your CDD adoption. How many endpoints do you have in the central repo? How many of them are being consumed by both the provider and the consumer? And what is the overall API coverage; is it trending up or trending down? These insights can help you improve your CDD adoption in your organisation. And just to recap, we support AsyncAPI, and there we can use JMS; if you have JMS, then you can mock that out. You can also use Specmatic for stubbing out databases with the JDBC stub, for stubbing out Redis, and many more such capabilities exist. So do check us out at specmatic.in. Thank you.