This article is part of a series on java.lang.OutOfMemoryError.
The Java runtime environment has a built-in garbage collection (GC) module. Many earlier programming languages had no automatic memory reclamation mechanism, and programmers were required to manually write code to allocate and release memory in order to reuse the heap.
In a Java program, you only need to care about memory allocation: if a block of memory is no longer used, the garbage collector cleans it up automatically. For the detailed principles of GC, please refer to the GC performance optimization series of articles. Generally speaking, the garbage collection algorithms built into the JVM can handle most business scenarios.
The java.lang.OutOfMemoryError: GC overhead limit exceeded error occurs when the program has consumed essentially all of the available memory and the GC repeatedly fails to reclaim it.
Cause Analysis:
The java.lang.OutOfMemoryError: GC overhead limit exceeded error is a signal that the proportion of time spent on garbage collection is too large and the amount of useful work too small. By default, the JVM throws this error when more than 98% of total time is spent in GC and less than 2% of the heap is recovered by a collection.
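With the parallel collector, HotSpot exposes these two thresholds as tunable flags. A sketch (MyApp is a placeholder class name; the values shown are the documented defaults):

```shell
# GCTimeLimit:     percentage of total time spent in GC above which
#                  the error may be thrown (default 98).
# GCHeapFreeLimit: minimum percentage of heap that a collection must
#                  free to count as productive (default 2).
java -Xmx128m -XX:GCTimeLimit=98 -XX:GCHeapFreeLimit=2 MyApp
```

Lowering GCTimeLimit makes the JVM give up sooner; these are tuning knobs for the detection threshold, not a fix for the underlying problem.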
Note that the java.lang.OutOfMemoryError: GC overhead limit exceeded error is only thrown in the extreme case where several consecutive collections each reclaim less than 2% of the heap. What would happen if the error were not thrown? The small amount of memory the GC manages to free would fill up again almost immediately, forcing another collection. This forms a vicious circle: CPU usage stays at 100%, yet the GC accomplishes nothing. Users see the system freeze, as an operation that used to take a few milliseconds now takes minutes to complete.
This error is thus a good example of the fail-fast principle.
Example: the following code puts data into a Map in an infinite loop, which eventually produces the GC overhead limit exceeded error:
import java.util.Map;
import java.util.Random;

public class TestWrapper {
    public static void main(String[] args) {
        // System properties are backed by a Hashtable; adding new
        // entries forever eventually exhausts the heap.
        Map map = System.getProperties();
        Random r = new Random();
        while (true) {
            map.put(r.nextInt(), "value");
        }
    }
}
Configure the JVM parameter -Xmx12m. The error produced during execution is as follows:
Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit exceeded
at java.util.Hashtable.addEntry(Hashtable.java:435)
at java.util.Hashtable.put(Hashtable.java:476)
at com.cncounter.rtime.TestWrapper.main(TestWrapper.java:11)
The error message you encounter will not necessarily be identical. In this case, the JVM parameters used were:
java -Xmx12m -XX:+UseParallelGC TestWrapper
The java.lang.OutOfMemoryError: GC overhead limit exceeded message appeared almost immediately. But this example is a bit tricky: with a different heap size or a different GC algorithm, the resulting error message differs. For example, with the Java heap set to 10 MB:
java -Xmx10m -XX:+UseParallelGC TestWrapper
The resulting error message is as follows:
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at java.util.Hashtable.rehash(Hashtable.java:401)
at java.util.Hashtable.addEntry(Hashtable.java:425)
at java.util.Hashtable.put(Hashtable.java:476)
at com.cncounter.rtime.TestWrapper.main(TestWrapper.java:11)
Try modifying the parameters and running the program yourself; the error message and stack trace may differ.
Here the java.lang.OutOfMemoryError: Java heap space error is thrown while the Hashtable backing the Map is rehashing. With other garbage collection algorithms, such as -XX:+UseConcMarkSweepGC or -XX:+UseG1GC, the error is caught by the default uncaught-exception handler, but without a stack trace, because there is no memory left to fill in the stack trace when the exception is created.
For example configuration:
-Xmx12m -XX:+UseG1GC
Running on 64-bit Windows 7 with Java 8, the error message produced is:
Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "main"
Readers are encouraged to experiment with the memory configuration and the garbage collection algorithm.
These cases show that under resource constraints it is impossible to predict exactly which specific error a program will die of. So when facing such errors, you cannot tie your error handling to one specific failure sequence.
Solution:
If all you want is to stop the "java.lang.OutOfMemoryError: GC overhead limit exceeded" message from being thrown, add the following startup parameter:
// not recommended
-XX:-UseGCOverheadLimit
We strongly recommend against this option: it does not solve the problem, it only postpones the out-of-memory error slightly, and something else will have to deal with it in the end. Specifying this option merely masks the original java.lang.OutOfMemoryError: GC overhead limit exceeded error behind the more generic java.lang.OutOfMemoryError: Java heap space message.
Note: sometimes the GC overhead limit error is simply caused by allocating too little heap memory to the JVM. In that case, just increase the heap size.
In many cases, however, increasing the heap does not solve the problem. If the program has a memory leak, for example, a larger heap only postpones the java.lang.OutOfMemoryError: Java heap space error.
Increasing the heap may also lengthen GC pauses, affecting the program's throughput or latency.
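A minimal sketch of such a leak, with hypothetical names (LeakyCache and handleRequest are illustrative, not from any real system): a static map that is only ever added to keeps every value reachable, so the GC can never reclaim them and a larger -Xmx only delays the error.

```java
import java.util.HashMap;
import java.util.Map;

public class LeakyCache {
    // Entries are added but never evicted, so every value stays
    // reachable from this static field forever.
    static final Map<Integer, byte[]> CACHE = new HashMap<>();

    static void handleRequest(int id) {
        // Each "request" pins another 1 KB payload in the map.
        CACHE.put(id, new byte[1024]);
    }

    public static void main(String[] args) {
        for (int i = 0; i < 10_000; i++) {
            handleRequest(i);
        }
        // ~10 MB is now unreclaimable; with enough requests this
        // ends in an OutOfMemoryError regardless of heap size.
        System.out.println(CACHE.size()); // prints 10000
    }
}
```

The actual fix is an eviction policy (a size bound, or weak references), not a bigger heap.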
If you want to solve the problem fundamentally, you need to troubleshoot the code related to memory allocation. Simply put, you need to answer the following questions:
Which type of object takes up the most memory?
In which part of the code are these objects allocated?
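For the first question, a class histogram from the standard jmap tool is often enough to get started (12345 is a placeholder for the target JVM's process id):

```shell
# Print a histogram of live objects on the heap, one row per class,
# with instance counts and total bytes, sorted by size.
jmap -histo:live 12345 | head -n 20
```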
Figuring this out may take several days. The general process is as follows:
Obtain permission to perform a heap dump on the production server. A "dump" is a snapshot of the heap memory that can be used for later analysis. These snapshots may contain confidential information such as passwords or credit card numbers, so due to corporate security restrictions, getting a heap dump from a production environment is sometimes not easy.
Perform the heap dump at an appropriate time. Generally speaking, memory analysis needs to compare multiple heap dump files, and a snapshot taken at the wrong moment may be worthless. In addition, the JVM "freezes" every time a heap dump is executed, so in a production environment you cannot dump too often, or the system will slow down or hang and your troubles will multiply.
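Two standard ways to capture a dump, sketched here with placeholder file paths and PID:

```shell
# On-demand dump of live objects, in HPROF binary format:
jmap -dump:live,format=b,file=/tmp/heap.hprof 12345

# Or have the JVM write the dump automatically at the moment of failure:
java -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/heap.hprof MyApp
```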
Load the dump file on another machine. If the JVM in question has 8 GB of heap, the machine analyzing the dump generally needs more than 8 GB of memory. Then open it in a dump analysis tool (we recommend Eclipse MAT, though other tools also work).
Identify the GC roots that retain the most memory in the snapshot. For details, please refer to: Solving OutOfMemoryError (part 6) – Dump is not a waste. This may be a bit difficult for novices, but it will deepen your understanding of the heap's structure and reference chains.
Next, locate the code that may be allocating large numbers of these objects. If you are very familiar with the whole system, you may find it quickly; if you are unlucky, expect to work overtime on the investigation.