.NET Core 3 was recently released and brought with it a bunch of innovations. Besides C# 8 and support for WinForms & WPF, the new release added a brand new JSON (de)serializer. This new serializer goes by the name System.Text.Json and as the name suggests, all its classes are in that namespace.
This is a big deal. JSON serialization is a big factor in web applications. Most of today’s REST APIs rely on it. When your JavaScript client sends a JSON request in a POST body, the server uses JSON deserialization to convert it to a C# object. And when the server returns an object in its response, it serializes that object into JSON for your JavaScript client to understand. These are major operations that happen on every request involving objects. Their performance can significantly impact application performance, as you’re about to see.
If you’ve been working with .NET for some time, then you know the excellent Json.NET serializer, also known as Newtonsoft.Json. So why do we need a new serializer if we already have Newtonsoft.Json? While Newtonsoft.Json is great, there are several good reasons to replace it:
- Microsoft wanted to make use of new types like Span<T> to improve performance. Modifying a huge library like Newtonsoft.Json without breaking functionality is very difficult.
- Most network protocols, including HTTP, use UTF-8 text. The .NET string type is UTF-16. Newtonsoft transcodes UTF-8 into UTF-16 strings as it works, compromising performance. The new serializer works with UTF-8 directly.
- Since Newtonsoft.Json is a 3rd-party library and not part of the .NET Framework (BCL or FCL classes), you might have projects with dependencies on different versions. ASP.NET Core itself depended on Newtonsoft.Json, which resulted in many version conflicts.
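To make the UTF-8 point concrete, System.Text.Json can serialize straight to UTF-8 bytes and skip the intermediate UTF-16 string entirely. A minimal sketch (the Person class here is just an invented example):

```csharp
using System;
using System.Text.Json;

public class Person
{
    public string Name { get; set; }
    public int Age { get; set; }
}

public static class Program
{
    public static void Main()
    {
        var person = new Person { Name = "Ada", Age = 36 };

        // The usual route: produces a UTF-16 .NET string
        string json = JsonSerializer.Serialize(person);

        // The UTF-8 route: no intermediate string,
        // ready to write to a network stream as-is
        byte[] utf8Json = JsonSerializer.SerializeToUtf8Bytes(person);

        Console.WriteLine(json);
        Console.WriteLine(utf8Json.Length);
    }
}
```

SerializeToUtf8Bytes is typically the faster path when the output is headed for the network anyway, since no transcoding step is needed.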
In this blog post, we’re going to do some performance benchmarks to see just how much the new serializer improved performance. Except that we’re also going to compare both Newtonsoft.Json and System.Text.Json to other major serializers and see how they fare against each other.
The Battling Serializers
Here’s our lineup:
- Newtonsoft.Json (also known as Json.NET) – The current industry-standard serializer. It was integrated into ASP.NET even though it was 3rd party. The #1 NuGet package of all time. Award-winning library (probably, I don’t know).
- System.Text.Json – The brand new serializer by Microsoft. Supposedly faster and better than Newtonsoft.Json. Integrated by default with the new ASP.NET Core 3 projects. It’s part of the .NET framework itself, so there are no NuGet dependencies needed (and no more version conflicts either).
- DataContractJsonSerializer – An older, Microsoft-developed serializer that was integrated in previous ASP.NET versions until Newtonsoft.Json replaced it.
- Jil – A fast JSON serializer based on Sigil
- ServiceStack – .NET serializer to JSON, JSV, and CSV. A self-proclaimed fastest .NET text serializer (meaning not binary).
- Utf8Json – Another self-proclaimed fastest C# to JSON serializer. Works with zero allocations and reads/writes directly to UTF-8 binary for performance.
Note that there are non-JSON serializers that are faster. Most notably, protobuf-net is a binary serializer that should be faster than any of the compared serializers in this article (though not verified in the benchmarks).
Benchmark structure
It’s not so easy to compare serializers. We’ll need to compare both serialization and deserialization. We’ll need to compare different types of classes (small and big), Lists, and Dictionaries. And we’ll need to compare serialization targets: strings, streams, and byte arrays (UTF-8). That’s a pretty big matrix of benchmarks, but I’ll try to make it as organized and concise as possible.
We’ll test 3 different functionalities:
- Serialization to string
- Serialization to stream
- Deserialization from string
- Requests per second with an ASP.NET Core 3 application
For each, we’ll test different types of objects (which you can see on GitHub):
- A small class with just 3 primitive-type properties
- A bigger class with about 25 properties, a DateTime, and a couple of enums
- A List with 1000 items (of the small class)
- A Dictionary with 1000 items (of the small class)
It’s not all of the required benchmarks, but it’s a pretty good indicator I think.
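The actual model classes are in the GitHub repo linked above; as a rough idea of scale, the small class is something along these lines (my own sketch, not the repo’s exact code):

```csharp
using System;

// A hypothetical stand-in for the benchmark's small model class:
// just 3 primitive-type properties, nothing else.
public class SmallClass
{
    public int Id { get; set; }
    public string Name { get; set; }
    public bool IsActive { get; set; }
}

public static class Program
{
    public static void Main()
    {
        var item = new SmallClass { Id = 1, Name = "item-1", IsActive = true };
        Console.WriteLine(item.Name);
    }
}
```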
For all benchmarks, I used BenchmarkDotNet with the following system: BenchmarkDotNet=v0.11.5, OS=Windows 10.0.17134.1069 (1803/April2018Update/Redstone4), Intel Core i7-7700HQ CPU 2.80GHz (Kaby Lake), 1 CPU, 8 logical and 4 physical cores, .NET Core SDK=3.0.100, Host: .NET Core 3.0.0 (CoreCLR 4.700.19.46205, CoreFX 4.700.19.46214), 64bit RyuJIT. The benchmark project itself is on GitHub.
Everything will be tested only in .NET Core 3 projects.
Benchmark 1: Serializing to String
The first thing we’ll test is serializing our different objects to a string.
The benchmark code itself is pretty straightforward (see on GitHub ):
using System.IO;
using System.Runtime.Serialization.Json;
using BenchmarkDotNet.Attributes;
using Newtonsoft.Json;
using SST = ServiceStack.Text; // alias, since ServiceStack also has a JsonSerializer
using JsonSerializer = System.Text.Json.JsonSerializer;

public class SerializeToString<T> where T : new()
{
    private T _instance;
    private DataContractJsonSerializer _dataContractJsonSerializer;

    [GlobalSetup]
    public void Setup()
    {
        _instance = new T();
        _dataContractJsonSerializer = new DataContractJsonSerializer(typeof(T));
    }

    [Benchmark]
    public string RunSystemTextJson()
    {
        return JsonSerializer.Serialize(_instance);
    }

    [Benchmark]
    public string RunNewtonsoft()
    {
        return JsonConvert.SerializeObject(_instance);
    }

    [Benchmark]
    public string RunDataContractJsonSerializer()
    {
        using (MemoryStream stream1 = new MemoryStream())
        {
            _dataContractJsonSerializer.WriteObject(stream1, _instance);
            stream1.Position = 0;
            using var sr = new StreamReader(stream1);
            return sr.ReadToEnd();
        }
    }

    [Benchmark]
    public string RunJil()
    {
        return Jil.JSON.Serialize(_instance);
    }

    [Benchmark]
    public string RunUtf8Json()
    {
        return Utf8Json.JsonSerializer.ToJsonString(_instance);
    }

    [Benchmark]
    public string RunServiceStack()
    {
        return SST.JsonSerializer.SerializeToString(_instance);
    }
}
The above benchmark class is generic, so we can test all of the different objects with the same code, like this:
BenchmarkRunner.Run<SerializeToString<Models.BigClass>>();
After running all class types with all the serializers, here are the results:
The actual numbers of the results can be seen here
- Utf8Json is fastest by far, over 4 times faster than Newtonsoft.Json and System.Text.Json. This is a pretty amazing difference.
- Jil is also very fast, about 2.5 times faster than Newtonsoft.Json and System.Text.Json.
- The new serializer System.Text.Json does better than Newtonsoft.Json in most cases by about 10%, except for Dictionary, where it does worse by about 10%.
- The older DataContractJsonSerializer is far worse than all the others.
- ServiceStack is right there in the middle, showing that it’s no longer the fastest text serializer. At least not for JSON.
Benchmark 2: Serializing to Stream
The second set of benchmarks is very similar, except that we serialize to stream. The benchmark’s code is here . And the results:
The actual numbers of the results can be seen here . Thanks to Adam Sitnik and Ahson Khan for helping me to get System.Text.Json to work.
The results are pretty similar to before. Utf8Json and Jil are as much as 4 times faster than the others, with Jil a close second to Utf8Json. DataContractJsonSerializer is still slowest in most cases. Newtonsoft actually performed better than before – as well as System.Text.Json in most cases, and better than System.Text.Json for Dictionary.
Benchmark 3: Deserializing from String
The next set of benchmarks is about deserialization from string. The benchmark code can be found here .
The actual numbers of the results can be seen here
I had a hard time with DataContractJsonSerializer on this one, so it’s not included. We can see that in deserialization Jil is fastest, with Utf8Json a close second. Those two are 2-3 times faster than System.Text.Json. And System.Text.Json is about 30% faster than Json.NET.
So far, it looks like the popular Newtonsoft.Json and the newcomer System.Text.Json have significantly worse performance than some of the others. This was pretty surprising to me, given Newtonsoft.Json’s popularity and all the hype around Microsoft’s new top-performer System.Text.Json. Let’s test them even further in an ASP.NET application.
Benchmark 4: Requests per Second by a .NET server
As mentioned before, JSON serialization is so important because it constantly occurs in REST APIs. HTTP requests to a server that use the content-type application/json need to serialize or deserialize a JSON object. When a server accepts a payload in a POST request, it deserializes from JSON. When a server returns an object in its response, it serializes JSON. Modern client-server communication relies heavily on JSON serialization. That’s why, to test a “real world” scenario, it makes sense to create a test server and measure its performance.
I was inspired by Microsoft’s performance test where they created an MVC server application and tested requests per second. Microsoft’s benchmark tests System.Text.Json vs Newtonsoft.Json. In this article, we’re going to do the same, except that we’re going to compare them to Utf8Json, which proved to be one of the fastest serializers in the previous benchmarks.
Unfortunately, I wasn’t able to integrate ASP.NET Core 3 with Jil, so the benchmark doesn’t include it. I’m absolutely sure it’s possible with some more effort.
Building this test proved more challenging than before. First, I created an ASP.NET Core 3.0 MVC application, just like in MS’s test. I added a controller for the performance tests that’s kind of similar to the one in MS’s test :
[Route("mvc")]
public class JsonSerializeController : Controller
{
    private static Benchmarks.Serializers.Models.ThousandSmallClassList _thousandSmallClassList
        = new Benchmarks.Serializers.Models.ThousandSmallClassList();

    [HttpPost("DeserializeThousandSmallClassList")]
    [Consumes("application/json")]
    public ActionResult DeserializeThousandSmallClassList(
        [FromBody] Benchmarks.Serializers.Models.ThousandSmallClassList obj) => Ok();

    [HttpGet("SerializeThousandSmallClassList")]
    [Produces("application/json")]
    public object SerializeThousandSmallClassList() => _thousandSmallClassList;
}
When the client calls the endpoint DeserializeThousandSmallClassList, the server accepts a JSON text and deserializes the content. This tests deserialization. When the client calls SerializeThousandSmallClassList, the server returns a list of 1000 SmallClass items, and by doing that serializes the content to JSON.
Next, we need to cancel logging on each request so it won’t affect the result:
public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)
        .ConfigureLogging(logging =>
        {
            logging.ClearProviders();
            //logging.AddConsole();
        })
        .ConfigureWebHostDefaults(webBuilder =>
        {
            webBuilder.UseStartup<Startup>();
        });
Now we need a way to switch between System.Text.Json, Newtonsoft, and Utf8Json. For the first two, the switch is easy. Doing nothing will work with System.Text.Json. To switch to Newtonsoft.Json, just add one line in ConfigureServices
:
public void ConfigureServices(IServiceCollection services)
{
    services.AddControllersWithViews()
        //Uncomment for Newtonsoft. When commented, it uses the default System.Text.Json
        .AddNewtonsoftJson()
        ;
}
For Utf8Json, we’ll need to add custom InputFormatter and OutputFormatter media formatters. This was a bit challenging, but eventually I found a good solution online and after a few tweaks it worked. There’s also a NuGet package with the formatters, but it doesn’t work with ASP.NET Core 3.
internal sealed class Utf8JsonInputFormatter1 : IInputFormatter
{
    private readonly IJsonFormatterResolver _resolver;

    public Utf8JsonInputFormatter1() : this(null) { }

    public Utf8JsonInputFormatter1(IJsonFormatterResolver resolver)
    {
        _resolver = resolver ?? JsonSerializer.DefaultResolver;
    }

    public bool CanRead(InputFormatterContext context) =>
        context.HttpContext.Request.ContentType.StartsWith("application/json");

    public async Task<InputFormatterResult> ReadAsync(InputFormatterContext context)
    {
        var request = context.HttpContext.Request;
        if (request.Body.CanSeek && request.Body.Length == 0)
            return await InputFormatterResult.NoValueAsync();

        var result = await JsonSerializer.NonGeneric.DeserializeAsync(context.ModelType, request.Body, _resolver);
        return await InputFormatterResult.SuccessAsync(result);
    }
}
internal sealed class Utf8JsonOutputFormatter1 : IOutputFormatter
{
    private readonly IJsonFormatterResolver _resolver;

    public Utf8JsonOutputFormatter1() : this(null) { }

    public Utf8JsonOutputFormatter1(IJsonFormatterResolver resolver)
    {
        _resolver = resolver ?? JsonSerializer.DefaultResolver;
    }

    public bool CanWriteResult(OutputFormatterCanWriteContext context) => true;

    public async Task WriteAsync(OutputFormatterWriteContext context)
    {
        if (!context.ContentTypeIsServerDefined)
            context.HttpContext.Response.ContentType = "application/json";

        if (context.ObjectType == typeof(object))
        {
            await JsonSerializer.NonGeneric.SerializeAsync(context.HttpContext.Response.Body, context.Object, _resolver);
        }
        else
        {
            await JsonSerializer.NonGeneric.SerializeAsync(context.ObjectType, context.HttpContext.Response.Body, context.Object, _resolver);
        }
    }
}
Now to have ASP.NET use these formatters:
public void ConfigureServices(IServiceCollection services)
{
    services.AddControllersWithViews()
        //Uncomment for Newtonsoft
        //.AddNewtonsoftJson()
        //Uncomment for Utf8Json
        .AddMvcOptions(option =>
        {
            option.OutputFormatters.Clear();
            option.OutputFormatters.Add(new Utf8JsonOutputFormatter1(StandardResolver.Default));
            option.InputFormatters.Clear();
            option.InputFormatters.Add(new Utf8JsonInputFormatter1());
        });
}
So this is it for the server. Now for the client.
C# Request-per-Second Client
I built a client application in C# as well, though most real-world scenarios will have JavaScript clients. For our purposes, it doesn’t really matter. Here’s the code:
public class RequestPerSecondClient
{
    private const string HttpsLocalhost = "https://localhost:5001/";

    public async Task Run(bool serialize, bool isUtf8Json)
    {
        await Task.Delay(TimeSpan.FromSeconds(5));
        var client = new HttpClient();
        var json = JsonConvert.SerializeObject(new Models.ThousandSmallClassList());

        // Warmup, just in case
        for (int i = 0; i < 100; i++)
        {
            await DoRequest(json, client, serialize);
        }

        int count = 0;
        Stopwatch sw = new Stopwatch();
        sw.Start();
        while (sw.Elapsed < TimeSpan.FromSeconds(1))
        {
            count++;
            await DoRequest(json, client, serialize);
        }

        Console.WriteLine("Requests in one second: " + count);
    }

    private async Task DoRequest(string json, HttpClient client, bool serialize)
    {
        if (serialize)
            await DoSerializeRequest(client);
        else
            await DoDeserializeRequest(json, client);
    }

    private async Task DoDeserializeRequest(string json, HttpClient client)
    {
        var uri = new Uri(HttpsLocalhost + "mvc/DeserializeThousandSmallClassList");
        var content = new StringContent(json, Encoding.UTF8, "application/json");
        var result = await client.PostAsync(uri, content);
        result.Dispose();
    }

    private async Task DoSerializeRequest(HttpClient client)
    {
        var uri = HttpsLocalhost + "mvc/SerializeThousandSmallClassList";
        var result = await client.GetAsync(uri);
        result.Dispose();
    }
}
This client will send continuous requests for 1 second while counting them.
Results
So without further ado, here are the results:
The actual numbers of the results can be seen here
Utf8Json outperformed the other serializers by a landslide. This shouldn’t come as a big surprise after the previous benchmarks.
For serialization, Utf8Json is 2 times faster than System.Text.Json and a whole 4 times faster than Newtonsoft. For deserialization, Utf8Json is 3.5 times faster than System.Text.Json and 6 times faster than Newtonsoft.
The only surprise here is how poorly Newtonsoft.Json performed. This is probably due to the UTF-16 and UTF-8 issue. The HTTP protocol works with UTF-8 text. Newtonsoft converts this text into .NET string types, which are UTF-16. Utf8Json and System.Text.Json avoid this overhead by working directly with UTF-8.
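To illustrate the difference, here’s a sketch of deserializing a request body directly from a UTF-8 stream with System.Text.Json; a MemoryStream stands in for the HTTP body, and the Message class is invented for the example:

```csharp
using System;
using System.IO;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;

public class Message
{
    public string Text { get; set; }
}

public static class Program
{
    public static async Task Main()
    {
        // Pretend this is an HTTP request body: raw UTF-8 bytes on a stream
        byte[] utf8Body = Encoding.UTF8.GetBytes("{\"Text\":\"hello\"}");
        using var body = new MemoryStream(utf8Body);

        // System.Text.Json reads the UTF-8 stream directly;
        // Newtonsoft.Json would first transcode it into a UTF-16 string
        var message = await JsonSerializer.DeserializeAsync<Message>(body);

        Console.WriteLine(message.Text);
    }
}
```

Newtonsoft.Json would typically read the body through a StreamReader into a UTF-16 string before parsing, which is exactly the transcoding overhead described above.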
It’s important to mention that this benchmark should be taken with a grain of salt since it might not fully reflect a real-world scenario. Here’s why:
- I ran everything on my local machine, both client and server. In a real-world scenario, the server and client are on different machines.
- The client sends requests one after another in a single thread. This means the server doesn’t accept more than one request at a time. In a real-world scenario, your server will accept requests on multiple threads from different machines. These serializers may act differently when serving multiple requests at a time. Perhaps some use more memory for better performance, which won’t do so well with multiple operations at a time. Or perhaps some create GC pressure. This is not likely with Utf8Json, which does no allocations.
- In Microsoft’s test, they achieved many more requests per second (over 100,000 in some cases). Sure, this probably has to do with the above 2 points and a smaller payload, but still, it’s suspicious.
- Benchmarks are easy to get wrong. It’s possible that I missed something or that the server can be optimized with some configuration.
Having said all of the above, these results are pretty incredible. It seems that you can significantly improve response time by changing the JSON serializer. Changing from Newtonsoft to System.Text.Json will increase request throughput by 2-7 times, and changing from Newtonsoft to Utf8Json will increase it by a huge factor of 6 to 14. This is not entirely fair because a real server will do much more than just accept arguments and return objects. It will probably do other stuff as well, like go to a database and do some business logic, so serialization time might play a lesser role. Still, these numbers are pretty incredible.
Conclusions
Let’s do a summary of everything so far:
- The newer System.Text.Json serializer is faster than Newtonsoft.Json in most benchmarks. Kudos to Microsoft for a job well done.
- 3rd party serializers proved to be faster than both Newtonsoft.Json and System.Text.Json. Specifically Utf8Json and Jil are about 2-4 times faster than System.Text.Json.
- The requests-per-second scenario showed that Utf8Json can be integrated with ASP.NET and significantly increase request throughput. Though, as mentioned, this is not a full real-world scenario, and I suggest doing additional benchmarks if you plan to change serializers in your ASP.NET app.
Does this mean we should all change to Utf8Json or Jil? The answer to that is… maybe. Remember that Newtonsoft.Json stood the test of time and became the most popular serializer for a reason. It supports a lot of features, has been tested with all kinds of edge cases, and has a ton of documented solutions and workarounds. Both System.Text.Json and Newtonsoft.Json are extremely well supported. Microsoft will continue to invest resources and effort into System.Text.Json, so you’re going to get excellent support. Jil and Utf8Json, on the other hand, have had very few commits in the last year; in fact, it looks like they haven’t had much maintenance in the last 6 months.
One option is to combine several serializers in your app: change to a faster serializer for the ASP.NET integration to achieve superior performance, but keep using Newtonsoft.Json in business logic to leverage its feature set.
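As a sketch of that split (the document-patching example is my own, not from the article’s repo): keep Newtonsoft.Json around for feature-heavy work like on-the-fly JSON manipulation with JObject, while the MVC formatters handle the hot path with a faster serializer for plain DTOs:

```csharp
using System;
using Newtonsoft.Json.Linq;

public static class Program
{
    public static void Main()
    {
        // Feature-heavy business logic stays with Newtonsoft.Json:
        // JObject lets us patch a document on the fly, something the
        // read-only JsonDocument in System.Text.Json (.NET Core 3) can't do.
        var doc = JObject.Parse("{\"user\":\"ada\",\"role\":\"admin\"}");
        doc["role"] = "viewer";
        doc["audited"] = true;

        Console.WriteLine(doc.ToString(Newtonsoft.Json.Formatting.None));
        // Meanwhile, the request/response hot path can use a faster
        // serializer via the custom MVC formatters shown earlier.
    }
}
```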
I hope you enjoyed this one, cheers.
Other Benchmarks
Several other benchmarks compare the different serializers:
- On Microsoft’s announcement of System.Text.Json, they released their own performance benchmark comparing System.Text.Json and Newtonsoft.Json. In addition to serialization and deserialization, this benchmark compares the Document class for random access, the Reader, and the Writer. They also released the requests-per-second test that inspired me to do my own.
- .NET Core’s GitHub repository includes a bunch of benchmarks similar to the ones in this article. In fact, I looked very closely at their benchmarks to make sure I wasn’t making any mistakes. You can find these in the Micro-benchmarks solution.
- Jil has its own benchmarks that compare Jil, Newtonsoft, Protobuf, and ServiceStack.
- Utf8Json published a set of benchmarks available on GitHub . These also compare to binary serializers.
- Alois Kraus did a great in-depth benchmark between the most popular .NET serializers, including JSON serializers, Binary serializers, and XML serializers. His benchmark includes both .NET Core 3 and .NET Framework 4.8 benchmarks.
My own take:
Newtonsoft.Json has vastly superior features to any of the other serializers.
System.Text.Json is conceptually superior as it does not need to buffer, but it lacks in both features and raw performance; it does still have low allocations.
Jil has similar capabilities via TextWriter/Reader support, but has worse allocation behavior than e.g. Utf8Json (you should always include allocations in your benchmarks for serializers).
Utf8Json buffers the full input/output in memory before flushing, but has a very high performance.
Caveat emptor, my own JSON serializer SpanJson is as fast as or faster than Utf8Json, and buffers just as Utf8Json does, but it has native UTF-8 and UTF-16 support, so it has its own niche.
I wholeheartedly agree with your conclusion that combining some serializers might be the way to go. Personally, I still hope that the feature set of System.Text.Json improves and that some of the performance problems can be mitigated. The underlying JSON types (Utf8JsonReader/Writer) are very fast. The serializer itself has a few knobs to tune, e.g. disabling html/js-safe escaping (as it's done in aspnetcore 3), but the superior concept and other requirements have certain downsides too.
Examples:
Utf8Json/SpanJson/Jil have automata-based deserialization methods for fast matching of JSON properties; this is not yet supported in System.Text.Json, as things like case-insensitive matching make the whole problem vastly more complicated.
Some of the conceptual decisions in System.Text.Json simply need more instructions than the highly optimized and low-overhead approaches of either Utf8Json/SpanJson or Jil. Async support has quite a bit of overhead, etc.
Still, as for serialization the chances are high that performance of System.Text.Json can be increased to similar levels as i.e. Jil.
As for deserialization, look at simdjson by Prof. Lemire et al. (or the port SimdJsonSharp by Egor Bogatov) for an outlook on what the future might bring.
Thanks for awesome feedback. SpanJson looks great.
This all makes me wonder whether my request/sec benchmark really does reflect a real-world scenario for Utf8Json. If it buffers full input/output in advance, then there's a high probability it will have poor performance in a high-load server that accepts multiple requests at a time.
The RPS benchmark has other problems, as listed below in another reply further down, but buffering is not one of them.
All the serializers with a modern design use ArrayPools to amortize the allocations; that's why Utf8Json/SpanJson and even the new System.Text.Json basically have the same performance on the recent TechEmpower benchmarks. The payload in that benchmark is extremely small, around 35 bytes or something.
So what happens for larger payloads?
Up to the initial buffer size (8-16kb depending on the lib)? Nothing, they all pretty much behave the same, the buffer is filled and after the serialization is done, the buffer is flushed to the output pipe/stream.
After that size it gets interesting. System.Text.Json is capable of flushing the data and reusing the old small buffer, Utf8Json/Spanjson will rent a new buffer from the pool and copy the data and continue.
The initial buffer size in SpanJson is very small, 256 bytes, but it tracks the last payload size in the serializer for more accurate buffer sizes later on.
In general memory speed is way higher than serialization speed, so even for heavily threaded scenarios it does not really matter as memory is already allocated in the pool.
There will be a break-even point somewhere along the road where a buffered approach is slower than flushing small blocks. A few proofs of concept in SpanJson placed that break-even for non-async code at ~100kb. The flushing approach was roughly 20%-40% slower for small sizes and maybe 30% faster for huge payloads of 100MB. And with the requirement to flush to an async output (i.e. a pipe), the break-even might only exist in theory or on high-load benchmark systems, simply because async code still has considerable overhead if it needs to await.
(Initial async prototypes in SpanJson were 2 or 3 times slower than the sync version.)
So why flushing data in small blocks?
Two things immediately come to mind:
David Fowler wrote it nicely in https://github.com/aspnet/A... and that's protection against DoS.
If your serializer needs to deserialize data from an untrusted source allowing boundless memory usage might not be the wisest approach.
They even changed the Newtonsoft.Json formatter to buffer into a file after an initial memory buffer of 32kb to follow that security practice in ASP.NET Core 3.0
Wow, thanks for another great response.
You said "All the serializers with a modern design use ArrayPools to amortize the allocations, that's why Utf8Json/SpanJson and even new System.Text.Json basically have the same performance on the recent TechEmpower Benchmarks."
I still don't understand why they have the same performance in the TechEmpower benchmark. They all use ArrayPools, but a single serialization is still faster in Utf8Json and SpanJson than in System.Text.Json. So is the wait for renting memory the thing that keeps them the same?
The TechEmpower benchmark uses the following payload: {"message":"Hello, World!"}
There is simply not enough to do which could make a difference in speed, the payload is small enough for the initial buffer of every serializer. All the codegen serializers will have some kind of pre-baked array for the property name, most might also include the double quotes and the colon directly baked into the array. The value does not have any escapable characters.
So at the end it's for all the modern serializers just a question of how much overhead each individual serialization call has and as for .NET core and code-gen (be it IL or expression trees) that's pretty much the same.
For example: The main difference between Utf8Json and SpanJson for this benchmark is actually Task vs. ValueTask for the CopyToOutput method. Utf8Json always awaits the task and SpanJson checks for IsCompleted (which is always true for that small payload) and does not need to await it, so SpanJson is a tiny tiny bit faster per call.
Got it, Thanks!
Nice. I recently converted my project from JSON.NET to System.Text.Json. It was an easy transition. I did find though some features of JSON.NET that are not implemented in the System.Text.Json. We have some requirements around filtering and patching JSONs on the fly based on configuration rules. So JSON.NET lives on in the project though with a smaller footprint.
Have you guys heard of simdjson?
I saw it at a dev conference yesterday and it seems to be super fast.
Has anybody used it? Benchmarked it?
https://github.com/lemire/s...
Yes. See my post in the addendum.
I'm quite tired of MS making these changes from version to version in Core. They should think twice next time. Core is a mess, full of issues on GitHub, and it's not getting better. From my experience with this, for example: [JsonIgnore], if used with Newtonsoft, now exposes all fields to the API. They should mention this somewhere.
I wanted to point out that if you repeat this test with much larger objects (~25kb objects, such as long string content etc.), the results change dramatically. In my tests with large classes, Jil and Utf8Json are 2x slower than Json.NET and System.Text.Json.
This gets more pronounced the bigger the object gets. With a 35kb object, for example, the humble Json.NET took 34 microseconds while Jil took 91 microseconds (Utf8Json was at 75 us).
Always do your own benchmarks for your specific use case - the results may surprise you.
Interesting!
Hi, interesting benchmarks! I did have a question though. In the graph "Serialize to string" for the Big Class, what unit of measurement are the results in? They don't match the result table on GitHub, "Serialize to string, big class".
Great job with the investigation
Thanks!
Can you post the error from Benchmark 2: Serializing to Stream on Stack Overflow?
I'm kind of exhausted from this article, sorry. But you're welcome to clone the repo and do it yourself. Besides, the MicroBenchmarks from Microsoft mentioned above are also missing this benchmark. They probably had the same issues. https://github.com/dotnet/p...
You could add NetJSON. Jil is pretty good. There are some differences in the way enums are serialized among all those libraries. I would like to recommend an article on the way enums and other data types are serialized, because some libraries may produce different outcomes. So, not all libs are compatible with each other. If someone is willing to migrate from Newtonsoft to a different library, the de/serialized JSON of both must be compared first to check if there are any breaking changes.
Thanks for the tips!
You are aware of: https://aloiskraus.wordpres... which goes into much greater depth with many more serializers? This should still be the most complete and up-to-date serializer performance benchmark for .NET Framework and .NET Core. Your test code looks OK-ish, but my test suite tests with large payloads and MemoryStreams, because I want to be able to directly de/serialize from a UTF-8 based file or binary network stream to an object and vice versa. There is also a comparison with SimdJson, which promises to parse JSON at GB/s speeds, which is very impressive.
I wasn't aware of it Alois. Great work on that one. Adding this benchmark to the "Other benchmarks" reference
Thanks!
The author of Newtonsoft.Json (James Newton-King) moved a while back to work on the ASP.NET Core team at Microsoft. So it's maybe safe to assume he's had some input into System.Text.Json, to its betterment.
That makes sense but looking at the GitHub repo history I don't see him committing in there
https://github.com/dotnet/c...
https://github.com/JamesNK
There are limitations with System.Text.Json, like string-based-key dictionary support. They have advised to stick to Newtonsoft for this scenario
Great article, good job done!
Thanks
Good article, found it helpful
So why didn't Microsoft just adopt Utf8Json then? Since it wins those benchmarks by a wide margin...
One of the goals was not to rely on a 3rd party library to prevent version conflicts. Instead, the new serializer is part of the .NET framework.
Besides that, I guess they wanted something proprietary.
Nice writeup! Thanks for doing all these benchmarks.
That's not necessarily a bad thing :)
If a library achieves its primary purpose well and doesn't have any major bugs, perhaps it doesn't need any changes.
Thanks. Maybe they don't have major bugs. Still, they both have open issues and usually there are feature requests.
I wish I could see Jil in the last benchmark. It's fairly easy to integrate; it took me just a few hours, but I'm using .NET Core 2.2 and don't really know what changed in .NET Core 3. Anyway, good job, thanks for the article. I'll not upgrade to .NET Core 3 for now.
Yeah I really wish I could have included it too. I had a hard time with .NET Core 3 and I just ran out of time and patience trying to make it work.
Sorry, but this benchmark is invalid for a lot of reasons.
First being, your load test only lasts one second, which is not sufficiently enough. Parsers which rely full on allocations for buffering will appear to be faster, since 1 second is not enough to saturate the garbage collection.
Second, you are calling one request by another, which do not saturate the CPU sufficiently and the async is not profiting enough from it. You need to use a proper benchmarking tool like https://github.com/wg/wrk
Also, you need two computers and a fast enough LAN connection (1 Gbit LAN, better 10 Gbit), and you must not run both on the same machine, because the client creates more load the more requests it makes. And the higher the client's load, the less CPU time remains for the actual application.
Otherwise the results are worthless, since the server's performance is limited by the CPU consumption of the client.
And the load needs to be really high. Use wrk with something like 16-request pipelining (send 16 requests on a single connection one after another, using the pipeline feature). This creates significantly higher load on the web application, making CPU time even more important, and there should be significant differences between async and sync serializers.
Such a hater. It may be invalid for certain cases, but he states his benchmarking methods and adds the caveats of not running them in parallel and not on different machines. He is testing the libraries' speed synchronously. We don't all use separate machines; e.g., I have a whole system of microservices communicating on one machine via a REST API. So it's very valid for me.
That's the point, dude. If you want to test the library, you have to test it in exactly that way; otherwise you also measure the performance of the client.
On top of that, adding more inaccuracy to that "so-called benchmark": you would need to run it as low down the pipeline as possible, i.e. as middleware, without all the other default middlewares (MVC, routing) eating up most of the CPU cycles.
The point is, the benchmark is inaccurate, because the faster a library is, the more it's "slowed down" by the client running on the same computer. And not benchmarking long enough (at least 5 to 10 minutes) simply fails to capture the GC overhead caused by some libraries.
Hence it neither measures the real performance of the library nor real-world performance (the peak is too short, and the client slows down the server by cannibalizing its CPU power).
The benchmark is invalid in every case. People need to finally learn how to benchmark.
Also, it's not stated how the other benchmarks were done, or even whether BenchmarkDotNet was used. It's well known that most benchmarks are nowhere near accurate if they are not performed correctly, and as seen in this case, a lot of people don't know how to perform a benchmark correctly in the first place.
Just as important: test with payloads > 85 KB. This would seriously impact libraries that require the whole string to be one contiguous buffer, since the allocated buffer would land directly on the LOH (Large Object Heap), which is collected far less often. In a load test running 5 minutes, the LOH would quickly fill up with libraries that don't manage memory well, resulting in gen 2 garbage collections. That means the GC has to collect (and compact) a very large area of memory (vs. the much smaller gen 0 and gen 1 spaces), requiring a lot more CPU cycles and halting the application for much longer.
This would change the results by a big margin for all libraries that have a lot of allocations or require the input to be reallocated into a buffer (and hence can't decode the payload as it's streamed).
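As a concrete illustration of the 85 KB threshold mentioned above (the array sizes below are arbitrary examples on either side of it):

```csharp
using System;

class LohDemo
{
    static void Main()
    {
        // Arrays of roughly 85,000 bytes or more are allocated on the
        // Large Object Heap, which is only collected during gen 2 GCs.
        var small = new byte[80_000]; // small object heap
        var large = new byte[90_000]; // large object heap

        Console.WriteLine(GC.GetGeneration(small)); // typically 0 (freshly allocated)
        Console.WriteLine(GC.GetGeneration(large)); // 2: the LOH is collected with gen 2
    }
}
```

So a serializer that buffers a >85 KB payload into a single string or array pays for it with gen 2 collections, which is exactly the overhead a one-second test never surfaces.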
It depends on what you're going for.
If the numbers are important to you, to get absolutes, then yes, you would need the rig you describe, and need a parallel test etc. However, this is simply about comparisons, comparing one serializer to another under the same conditions using the same method. It's valid, just not for the actual numbers that are produced, only the relative performance.
It's not valid when you want to use it to decide which one is best for your production application, because the loads there don't last just one second.
And in times of free or cheap cloud offerings, there's no excuse for not properly using two rigs. Spinning up two instances of App Service (or App Service plus a VM for a command-line client like wrk) is a matter of minutes, and even on bigger VMs it just costs a couple of cents to run the test for 15 minutes or so.
We eagerly await your more valid testing results. Please post a link when you're done.
Thanks for the good article!