Last month the Akamas team had the pleasure of joining 7,000+ attendees as a Silver sponsor of the KubeCon + CloudNativeCon North America 2022 conference organized by the Cloud Native Computing Foundation (CNCF) in Detroit. What a great time we had, meeting
Notice: An abridged version of this interview has been published as Akamas contributed content by TFIR for KubeCon + CloudNativeCon NA 2022 in Detroit, October 24-28. Stefano Doni, CTO at Kubernetes optimization company Akamas, and long-time CMG contributor and Best Paper
Today, developers often spend more time managing Kubernetes than developing the applications that run on it. This situation is exacerbated by the shortage of Kubernetes skills and by the complexity of developing and delivering well-tuned applications in Kubernetes
Our notes from KubeCon + CloudNativeCon in Valencia: KubeCon + CloudNativeCon Europe 2022 – what a fantastic event! I’m sure my feeling is shared by many among the 7,000 in-person attendees and the 10,000 who followed from home. We had the pleasure
This blog is co-authored by Kyle McMeekin, Head of Channel at Gremlin. Today’s enterprises are struggling to cope with the complexity of their environments, technologies, and applications. On top of these challenges, they face faster release rates and the need
Many companies delivering services based on applications running in the cloud face much higher costs than expected. The problem is that over-provisioning is too often the approach taken to minimize risk, particularly as development and release cycles get shorter
Organizations across the world are rapidly adopting Kubernetes because it provides several benefits from a performance perspective. Its ability to densely schedule containers onto the underlying machines translates into low infrastructure costs. It prevents a runaway container from impacting
The performance benefits of Kubernetes are indisputable. Just consider the efficiency it provides thanks to its ability to densely schedule containers onto the underlying machines, which translates into low infrastructure costs. Or the mechanisms available to isolate
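As a minimal sketch of the isolation mechanisms these posts refer to: Kubernetes lets you declare resource requests (which the scheduler uses to pack pods densely) and limits (which cap a runaway container). The pod name, image, and values below are illustrative assumptions, not recommendations:

```shell
# Illustrative only: requests guide dense scheduling,
# limits keep a runaway container from starving its neighbors.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: demo-app          # hypothetical pod name
spec:
  containers:
  - name: app
    image: nginx:1.25     # any container image
    resources:
      requests:
        cpu: "250m"       # scheduler places the pod based on requests
        memory: "256Mi"
      limits:
        cpu: "500m"       # CPU usage beyond this is throttled
        memory: "512Mi"   # exceeding this gets the container OOM-killed
EOF
```

Requests below limits allow dense packing while still bounding worst-case usage per container.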
Several cloud cost optimization solutions are available today, both from cloud providers – such as AWS Compute Optimizer or Google machine type recommendations – and from specialized COTS vendors. These tools may help you choose the right cloud instance and volume sizes, allocating resources in
Big data applications often offer significant opportunities for gains in both performance and cost. Typically, the underlying infrastructure – whether on-premises or in the cloud – is both inefficient and over-provisioned to ensure a good performance vs
The Java platform continues to be developed and improved over time. The OpenJDK community has been quite active in improving the performance of the JVM and the garbage collector (GC): new GCs are being developed and existing ones are constantly
Developers and system owners usually take for granted that there are intrinsic tradeoffs in the design of Java. For example, it is commonly accepted that if you aim to reduce resource usage (e.g. CPU), you must accept some performance degradation.
One of the most established approaches to improving Java application performance is tuning JVM options, in particular Garbage Collector (GC) parameters. Indeed, the main task of garbage collection is to free up memory, which requires stopping the application threads
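As one hedged example of the kind of JVM tuning described here: selecting a collector, fixing the heap size, and setting a pause-time goal are among the most common GC knobs. The flag values and `app.jar` below are illustrative starting points, not recommendations; optimal values depend on the workload:

```shell
# Illustrative JVM GC tuning flags (values are assumptions, tune per workload):
# - -XX:+UseG1GC selects the G1 garbage collector
# - -Xms/-Xmx fix the heap size to avoid resize pauses
# - -XX:MaxGCPauseMillis sets a stop-the-world pause-time goal
# - -Xlog:gc* emits GC logs for later analysis
java -XX:+UseG1GC -Xms2g -Xmx2g -XX:MaxGCPauseMillis=200 \
     -Xlog:gc* -jar app.jar   # app.jar is a hypothetical application
```

Because the GC stops application threads, such flags directly shape the latency/throughput tradeoff, which is why they are a frequent optimization target.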