All about HTTP Compression techniques and their benefits...
The IBM Maximo best-practices tech notes talk briefly about HTTP compression techniques. This article is an attempt to explore the topic in detail and summarize how to benefit from it.
Welcome to another article, folks! Today we will talk about something interesting and worth learning: “HTTP Compression”.
HTTP Compression on the Maximo side can be done in two ways:
At the application side: the HTTP Server level
At the load-balancing side: BIG-IP F5
Introduction:
HTTP compression is a technique used to reduce the amount of data that is sent between a web server and a client's web browser. When a web browser requests a web page or other resource from a web server using the HTTP protocol, the server sends the data in the form of text-based files such as HTML, CSS, JavaScript, or JSON.
HTTP compression works by compressing these text-based files using a compression algorithm before they are sent from the server to the client's web browser. This reduces the amount of data that needs to be transferred over the network, which can result in faster page load times and reduced bandwidth usage.
The most commonly used HTTP compression algorithms are gzip, deflate, and brotli. These algorithms work by compressing the text-based files into a smaller size, which can then be decompressed by the client's web browser before being displayed on the screen.
HTTP compression is supported by most modern web browsers and web servers, and can be enabled using server-side configuration. However, it's important to note that not all types of files are suitable for compression, and in some cases, compressing certain types of files can actually increase their size. Therefore, it's important to test the performance and compatibility of your web applications before and after enabling HTTP compression.
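To make the size savings concrete, here is a small Python sketch using only the standard library. The sample HTML payload is made up for illustration, but repetitive text-based content like this is exactly what compresses well:

```python
import gzip

# A repetitive text payload, standing in for a typical HTML response body.
html = ("<tr><td>Work Order</td><td>Approved</td></tr>\n" * 500).encode("utf-8")

# Compress at a moderate level, as a server-side gzip filter would.
compressed = gzip.compress(html, compresslevel=6)

print(f"original:   {len(html)} bytes")
print(f"compressed: {len(compressed)} bytes")
print(f"ratio:      {len(compressed) / len(html):.1%}")
```

Because the payload is highly repetitive, the compressed size is a small fraction of the original; real pages typically shrink by 60-80%.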
HTTP Compression in the Context of WebSphere/IBM HTTP Server:
WebSphere provides built-in support for HTTP compression using the gzip compression algorithm. This compression algorithm is widely supported by modern web browsers and can be configured in the WebSphere server to compress various types of content, such as HTML, CSS, JavaScript, and XML.
To enable HTTP compression in WebSphere, you can use the WebSphere Administrative Console to configure compression settings for specific web applications. You can also enable compression at the server level by configuring the HTTP server plug-in that routes requests to WebSphere.
It's worth noting that while HTTP compression can provide significant performance benefits, it can also have some trade-offs, such as increased CPU usage on the server side and potential compatibility issues with older web browsers that don't support compression. Therefore, it's important to test the performance and compatibility of your web applications before and after enabling HTTP compression in WebSphere.
WebSphere supports both the gzip and deflate techniques, and the latest versions of WebSphere support Brotli as well.
Hope you have liked it so far!
What we have seen so far is how to enable HTTP compression at the application server level. However, there is another option: enabling it at the load balancer level.
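For the IBM HTTP Server in front of WebSphere, compression is typically done with mod_deflate. Below is a sketch of what the relevant httpd.conf directives can look like; the module path, content types, and level are illustrative values, so consult the IBM documentation for your version before using them:

```apache
# Load the deflate module (path may differ per installation)
LoadModule deflate_module modules/mod_deflate.so

# Compress only text-based content types
AddOutputFilterByType DEFLATE text/html text/css text/javascript application/json text/xml

# Moderate compression level (IBM suggests staying in the 3-6 range)
DeflateCompressionLevel 6

# Skip old browsers known to mishandle gzip responses
BrowserMatch ^Mozilla/4\.0[678] no-gzip
```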
Below is an illustration of the high-level steps you can follow to enable it at the load balancer level: [Source: Internet]
Please refer to the official documentation for accuracy; the steps below are meant to give you an idea and a technical understanding of the concept.
Log in to the BIG-IP F5 web interface and navigate to Local Traffic > Profiles > HTTP Compression.
Click the Create button to create a new HTTP Compression profile.
In the General Properties section, enter a name for the profile and select the appropriate compression level. The compression level determines the amount of compression applied to the HTTP response. You can choose from three different compression levels: Low, Medium, and High.
In the Compression Content section, you can choose which types of content to compress. You can select specific MIME types or choose to compress all content types.
In the Buffer Size section, you can specify the maximum buffer size for compressed data. This setting determines the maximum size of data that can be compressed at one time.
In the HTTP Header section, you can choose to add or remove specific HTTP headers from the compressed response.
Click the Finished button to save the HTTP Compression profile.
Once the HTTP Compression profile is created, you can assign it to a virtual server or a pool.
You can refer to Page 42 of the IBM documentation on Maximo best practices for better performance to know more about how to enable them.
General factors to be considered for HTTP Compression:
Content type: HTTP compression can only be applied to certain types of content. It is important to ensure that the content being compressed is compatible with the chosen compression algorithm.
Compression level: Different compression algorithms have different compression levels, which determine the amount of compression applied to the data. It is important to choose a compression level that balances the level of compression with the amount of CPU resources required to compress and decompress the data.
E.g., the IBM HTTP Server documentation suggests a mod_deflate compression level between 3 and 6.
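This tradeoff is easy to observe with Python's zlib module, which implements the same DEFLATE algorithm that mod_deflate uses. The payload below is made up; the point is how output size changes with the level:

```python
import zlib

# A repetitive text payload standing in for server response data.
payload = ("<row id='1234'><col>PM Work Order</col></row>\n" * 1000).encode("utf-8")

# Compare output size at a fast, a moderate, and a maximum level.
for level in (1, 6, 9):
    size = len(zlib.compress(payload, level))
    print(f"level {level}: {size} bytes")
```

Higher levels never produce larger output on payloads like this, but the marginal gain from 6 to 9 is usually small while the CPU cost keeps growing, which is why mid-range levels are recommended.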
Buffer size: HTTP compression involves buffering the data before compressing it. The buffer size determines the amount of data that can be compressed at once. Choosing an appropriate buffer size is important to ensure that the compression process is efficient and doesn't consume too much memory.
CPU and memory usage: Enabling HTTP compression can increase CPU and memory usage on the server. It is important to ensure that the server has sufficient resources to handle the additional load.
Content encoding: HTTP compression modifies the content being sent over the network, so it is important to ensure that the client's web browser is capable of decompressing the content. This is typically done using the Accept-Encoding header in the HTTP request.
What is the max-age cache setting and how can it be enabled for Maximo:
max-age is an attribute that can be used in HTTP headers to specify the maximum amount of time, in seconds, that a browser or intermediate cache should keep a cached response fresh. When a browser or cache receives a response with a "Cache-Control: max-age" header, it will store the response in its cache and use it for subsequent requests until the max-age value is exceeded.
For example, if a server sets the "Cache-Control: max-age=3600" header in a response, it is telling the browser or cache to keep the response in its cache for up to 1 hour (3600 seconds). After 1 hour has elapsed, the cached response will be considered stale and a new request will be made to the server to fetch a fresh response.
The use of the max-age directive can help to reduce the number of requests made to a server and improve website performance by reducing network traffic. By caching responses in the browser or intermediate caches, subsequent requests can be served more quickly and with less data transfer, resulting in faster page load times and a better user experience.
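The freshness rule described above can be expressed directly in code. A small Python sketch follows; the header parsing is deliberately simplified and ignores the other Cache-Control directives a real cache must honor:

```python
import re

def is_fresh(cache_control: str, stored_at: float, now: float) -> bool:
    """True if a cached response is still fresh under its max-age directive."""
    match = re.search(r"max-age=(\d+)", cache_control)
    if not match:
        return False  # no max-age: treat as not freshly cacheable here
    return (now - stored_at) < int(match.group(1))

print(is_fresh("Cache-Control: max-age=3600", 0.0, 1800.0))  # 30 min in: → True
print(is_fresh("Cache-Control: max-age=3600", 0.0, 7200.0))  # 2 hours in: → False
```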
In Maximo, this is configured via the HttpMaxAgeFilter in <maximo_root>/applications/maximo/maximouiweb/webmodule/WEB-INF/web.xml:
<filter>
  <filter-name>HttpMaxAgeFilter</filter-name>
  <filter-class>psdi.webclient.system.filter.HttpMaxAgeFilter</filter-class>
  <init-param>
    <param-name>Cache-Control</param-name>
    <param-value>max-age=2764800</param-value>
  </init-param>
  <init-param>
    <param-name>Pragma</param-name>
    <param-value>max-age=2764800</param-value>
  </init-param>
</filter>
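As a quick sanity check on the value used above, a one-liner shows what 2764800 seconds amounts to:

```python
max_age = 2764800  # the max-age value from the web.xml snippet
print(max_age / 86400, "days")  # 86400 seconds per day → 32.0 days
```

In other words, the filter tells browsers they may reuse cached Maximo UI resources for about a month before re-requesting them.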
Note for Maximo Developers:
I hope you have gained a decent insight into HTTP compression techniques and their benefits.
Please use these with utmost care, as they could improve or degrade performance depending on how you set them up.
Follow IBM's recommendations and documentation, and test in your quality environment before enabling these in production systems.
Perform load tests in your quality environment to baseline your application performance before and after HTTP compression is enabled.
Roll back to the original settings if it is not working as per your expectations.
Validate the tests in all browsers used by your client to ensure that it works across the board.
Bonus Tips:
Different compression algorithms have different compression levels that determine the amount of compression applied to the data being compressed. Here's a brief overview of the compression levels for some of the most commonly used HTTP compression algorithms:
gzip: gzip is a popular compression algorithm used for HTTP compression. Its levels range from 1 to 9: a low level gives faster compression and decompression but less size reduction, while a high level gives more compression at the cost of slower compression and decompression.
deflate: deflate is another commonly used encoding for HTTP compression. It uses the same underlying DEFLATE algorithm (and the same 1-9 levels) as gzip, just with a lighter wrapper; in practice gzip is usually preferred because some clients handle deflate inconsistently.
Brotli: Brotli is a newer compression algorithm that offers higher compression ratios than gzip or deflate. Brotli has quality levels from 0 to 11, with higher levels offering more compression but requiring more CPU resources.
In general, higher compression levels result in smaller file sizes, but require more CPU resources to compress and decompress the data. It is important to choose an appropriate compression level based on the available resources and the desired level of compression.
I want to take a moment to thank you for the overwhelming response to my previous article. Your engagement and feedback are greatly appreciated. If you haven't already, please consider subscribing to this blog to stay up-to-date on my technical articles. Thank you for your support!