Troubleshooting Full GC Triggered by a Java ExecutorService Thread Pool (Part 2)

2014-11-23 23:38:03
start] 2014-03-31T08:28:49.776+0800: 210168.304: [CMS-concurrent-reset: 0.010/0.010 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
2014-03-31T08:28:49.851+0800: 210168.379: [GC 210168.379: [ParNew: 755007K->755007K(755008K), 0.0000420 secs]210168.379: [CMS: 3355177K->3355176K(3355456K), 0.2814320 secs] 4110185K->4110177K(4110464K), [CMS Perm : 40876K->40876K(98304K)], 0.2817730 secs] [Times: user=0.29 sys=0.00, real=0.28 secs]
2014-03-31T08:28:50.134+0800: 210168.662: [GC [1 CMS-initial-mark: 3355176K(3355456K)] 4110182K(4110464K), 0.0070090 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
2014-03-31T08:28:50.827+0800: 210169.355: [GC [1 CMS-initial-mark: 3355176K(3355456K)] 4110182K(4110464K), 0.0070100 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
2014-03-31T08:28:50.834+0800: 210169.362: [CMS-concurrent-mark-start]
2014-03-31T08:28:50.884+0800: 210169.412: [CMS-concurrent-mark: 0.049/0.050 secs] [Times: user=0.25 sys=0.00, real=0.05 secs]
2014-03-31T08:28:50.884+0800: 210169.412: [CMS-concurrent-preclean-start]
2014-03-31T08:28:51.007+0800: 210169.534: [CMS-concurrent-preclean: 0.123/0.123 secs] [Times: user=0.12 sys=0.00, real=0.12 secs]
2014-03-31T08:28:51.007+0800: 210169.534: [CMS-concurrent-abortable-preclean-start]
2014-03-31T08:28:51.007+0800: 210169.534: [CMS-concurrent-abortable-preclean: 0.000/0.000 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
2014-03-31T08:28:51.007+0800: 210169.535: [GC[YG occupancy: 755007 K (755008 K)]210169.535: [Rescan (parallel) , 0.0059270 secs]210169.541: [weak refs processing, 0.0000110 secs] [1 CMS-remark: 3355177K(3355456K)] 4110185K(4110464K), 0.0060740 secs] [Times: user=0.07 sys=0.00, real=0.01 secs]
2014-03-31T08:28:51.014+0800: 210169.541: [CMS-concurrent-sweep-start]
2014-03-31T08:28:51.049+0800: 210169.577: [CMS-concurrent-sweep: 0.035/0.035 secs] [Times: user=0.04 sys=0.00, real=0.04 secs]
2014-03-31T08:28:51.049+0800: 210169.577: [CMS-concurrent-reset-start]
2014-03-31T08:28:51.059+0800: 210169.587: [CMS-concurrent-reset: 0.010/0.010 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]


We found Full GC happening roughly every 2 seconds, each cycle reclaiming almost nothing, yet the GC kept trying. jstat painted the same picture of frequent Full GC, and jstat -gccause reported Allocation Failure as the cause. jmap -heap showed the old generation at 98% utilization, but the JVM never threw OOM and crashed. Had it crashed, the hypervisor would have restarted the process, which would then recover and redo the data it had just lost. Because each collection did manage to free a sliver of memory, the JVM never crashed (how to avoid this JVM behavior is covered later).
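The sampling commands were along these lines (9974 is the application PID referenced later in this article; the 2000 ms sampling interval is just an illustrative choice):

/usr/java/default/bin/jstat -gccause 9974 2000
/usr/java/default/bin/jmap -heap 9974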


(A small tip here: if jmap and similar tools hang for a long time and then print an exception, retrying a few times with the -F (force) option can help.)


Inspecting memory with /usr/java/default/bin/jmap -histo 9974 showed huge numbers of char[] and String instances, but with no stack trace or owning-class information we couldn't pinpoint the problem, so we dumped the entire heap with /usr/java/default/bin/jmap -dump:format=b,file=jvm.dump.hprof 9974 (with -XX:+HeapDumpOnOutOfMemoryError the JVM will also dump an hprof automatically on OOM, named java_pidxxx.hprof). The dump is as large as the current eden + from/to survivor + old + perm combined:

[xxxxxxxxxxxxxxxxxxxxxxxx]$ du -sh jvm.dump.hprof 
4.0G    jvm.dump.hprof

At first the file was a problem: it was large and sat on a remote production server, so pulling it across data centers to a local machine was slow. It turned out the dumped hprof was mostly text, so compressing it with tar brought it down to 300+ MB. Since the company's in-house heap-analysis tooling wasn't great, we put up with a transfer of nearly 20 minutes and copied the hprof to a local machine, then analyzed it with MAT (the Eclipse MAT plugin also works). Note that if the hprof is large, you also have to raise the -Xmx of MAT (or of Eclipse) above the hprof size before it can open the dump; a sketch of that tweak follows, and the analysis result comes after it.
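For standalone MAT this heap setting lives in MemoryAnalyzer.ini next to the executable (for the Eclipse plugin, in eclipse.ini); everything after the -vmargs line is passed to the analyzer's JVM. The 6g value below is only an assumption sized above the 4 GB hprof, not a value from the original article:

-vmargs
-Xmx6g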


[Figure: Eclipse MAT analysis of the heap dump (Leak Suspects view); original image: https://www.cppentry.com/upload_files/article/76/1_cqxys__.jpg]

Eclipse MAT immediately offered a Leak Suspect: the TaskDispatcher in our code (a Spring-injected singleton) was holding far too much memory. The other statistics (Top Consumers, the object tree, and so on) all confirmed that TaskDispatcher accounted for most of the heap. Drilling further into TaskDispatcher's memory in MAT and following the shallow sizes (note: shallow heap is the memory occupied by the object itself, its primitive fields and references, not the heap objects those references point to; retained heap additionally includes the heap memory kept alive through those references), we finally landed on the BlockingQueue inside the JDK's ExecutorService thread pool: it held more than 50,000 elements, each of which was a log Buffer of roughly 100 KB. Multiplying that out comes to roughly 4 GB.


Readers unfamiliar with ExecutorService can Google it, or see my earlier blog post on Taskqueue, which works similarly. The BlockingQueue is the intermediate buffer that asynchronously decouples the producer from the consumers. In this scenario the pool size and concurrency were already bounded, so under normal conditions, as long as the consumer side keeps up, the BlockingQueue should never grow large. The consumer is a piece of code that does socket I/O. Could the network reads and writes have failed or blocked?
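For context, the Executors factory methods build the pool on top of a work queue with no practical capacity bound. This is how the JDK implements Executors.newFixedThreadPool (comments added), which is why a stalled consumer lets submitted tasks pile up in the queue indefinitely as strongly referenced objects:

public static ExecutorService newFixedThreadPool(int nThreads) {
    return new ThreadPoolExecutor(nThreads, nThreads,
                                  0L, TimeUnit.MILLISECONDS,
                                  // No capacity argument: the queue is bounded only by
                                  // Integer.MAX_VALUE, so tasks accumulate without limit
                                  // whenever the worker threads fall behind.
                                  new LinkedBlockingQueue<Runnable>());
}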

The logs later confirmed it: there were large numbers of socket exceptions, and the network had been broken for an extended period at that time. On top of that, the socket read/write timeout was set fairly high, on the order of seconds, so the consumer drained the BlockingQueue far more slowly than the producer filled it, and the queue grew and grew. Since every object in the queue was strongly referenced, Full GC kept firing without being able to reclaim anything.


Many readers will then wonder: with Full GC this frequent, why didn't the JVM just crash? The reason is that two JVM parameters had not been set:

-XX:+UseGCOverheadLimit — this tells the JVM that once too large a share of time is being spent in GC with too little memory reclaimed, it should give up and throw an OutOfMemoryError instead of trying forever.
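As a sketch of how this is usually configured on a HotSpot JVM (the thresholds shown are the HotSpot defaults, and the flag combination here is an illustration rather than the exact settings used in this incident): with -XX:+UseGCOverheadLimit enabled, the JVM throws java.lang.OutOfMemoryError: GC overhead limit exceeded once more than GCTimeLimit percent of total time goes to GC while less than GCHeapFreeLimit percent of the heap is recovered.

java -XX:+UseGCOverheadLimit \
     -XX:GCTimeLimit=98 \
     -XX:GCHeapFreeLimit=2 \
     -XX:+HeapDumpOnOutOfMemoryError \
     ...

Combined with -XX:+HeapDumpOnOutOfMemoryError, the resulting OOM also leaves an hprof behind for offline analysis.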

The code was subsequently changed to construct the pool by hand, new ThreadPoolExecutor(xxx, xxx, xxx, new LinkedBlockingDeque(capacity)), supplying our own BlockingQueue instance with an explicit capacity (without a capacity argument the queue is still effectively unbounded). Going one step further, you can supply a custom rejection policy to the ExecutorService to decide what happens when the queue gets too long, as in the sketch below. The queue used by the default factories has a maximum capacity of Integer.MAX_VALUE, and the default policy when a queue does fill up is AbortPolicy, which rejects the new task by throwing a RejectedExecutionException.
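A minimal sketch of that fix; the pool sizes, queue capacity, and the choice of CallerRunsPolicy below are illustrative assumptions, not the values from the production code:

import java.util.concurrent.*;

public class BoundedDispatcherPool {

    public static ExecutorService create() {
        // Bounded queue: at most 10,000 tasks can wait in memory at any time.
        BlockingQueue<Runnable> queue = new LinkedBlockingDeque<>(10_000);

        return new ThreadPoolExecutor(
                4,                       // core pool size (assumed)
                4,                       // maximum pool size (assumed)
                0L, TimeUnit.MILLISECONDS,
                queue,
                // When the queue is full, run the task on the submitting thread.
                // This applies back-pressure to the producer instead of letting the
                // default AbortPolicy throw RejectedExecutionException.
                new ThreadPoolExecutor.CallerRunsPolicy());
    }
}

Whether to slow the producer down (CallerRunsPolicy), discard work (DiscardPolicy / DiscardOldestPolicy), or fail fast (the default AbortPolicy) depends on whether the queued tasks, in this case log buffers, may be dropped.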