Thursday, May 3, 2018

Diagnosing why a Hive query (SELECT only, no INSERT into a table/partition) produces a huge number of small files

Running a simple select + filter + limit statement: because the filter references a non-partition field, Hive launches an MR job instead of a local fetch (see also: the issue of Hive automatically converting queries with non-partition filters into a local FetchTask).
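The kind of statement in question looks roughly like this (the table and column names below are hypothetical, chosen only to illustrate the shape of the query):

```sql
-- Because user_id is NOT a partition column, the WHERE clause
-- cannot be satisfied by partition pruning alone, so Hive runs
-- a full MR job instead of a local FetchTask.
SELECT *
FROM access_log          -- hypothetical table
WHERE user_id = 12345    -- non-partition field in the filter
LIMIT 10;
```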
At the Hive CLI, after the MR job succeeded (YARN also showed it as succeeded), the session hung right after the INFO: OK log line, apparently stuck while fetching the results.
A jstack thread dump shows it is blocked fetching DFS file metadata (block locations) from the NameNode:
java.lang.Thread.State: WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
at java.lang.Object.wait(Object.java:502)
at org.apache.hadoop.ipc.Client.call(Client.java:1463)
- locked <0x00000003377bf118> (a org.apache.hadoop.ipc.Client$Call)
at org.apache.hadoop.ipc.Client.call(Client.java:1409)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
at com.sun.proxy.$Proxy29.getBlockLocations(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getBlockLocations(ClientNamenodeProtocolTranslatorPB.java:256)
at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:256)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
at com.sun.proxy.$Proxy30.getBlockLocations(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:1279)
at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1266)
at org.apache.hadoop.hdfs.DFSClient.getBlockLocations(DFSClient.java:1324)
at org.apache.hadoop.hdfs.DistributedFileSystem$2.doCall(DistributedFileSystem.java:237)
at org.apache.hadoop.hdfs.DistributedFileSystem$2.doCall(DistributedFileSystem.java:233)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileBlockLocations(DistributedFileSystem.java:233)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileBlockLocations(DistributedFileSystem.java:224)
at org.apache.hadoop.fs.FilterFileSystem.getFileBlockLocations(FilterFileSystem.java:148)
at org.apache.hadoop.fs.viewfs.ChRootedFileSystem.getFileBlockLocations(ChRootedFileSystem.java:211)
at org.apache.hadoop.fs.viewfs.ViewFileSystem.getFileBlockLocations(ViewFileSystem.java:330)
at org.apache.hadoop.fs.FileSystem$4.next(FileSystem.java:1776)
at org.apache.hadoop.fs.FileSystem$4.next(FileSystem.java:1759)
at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:270)
at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:229)
at org.apache.hadoop.mapred.SequenceFileInputFormat.listStatus(SequenceFileInputFormat.java:45)
at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:315)
at org.apache.hadoop.hive.ql.exec.FetchOperator.getNextSplits(FetchOperator.java:372)
at org.apache.hadoop.hive.ql.exec.FetchOperator.getRecordReader(FetchOperator.java:304)
at org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:459)
at org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:428)
at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:147)
at org.apache.hadoop.hive.ql.Driver.getResults(Driver.java:2213)
at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:253)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:184)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:403)
at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:821)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:759)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:686)
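For reference, a dump like the one above can be captured as follows (looking the pid up via jps is a common approach, not something from the original investigation; `<pid>` is a placeholder to fill in):

```shell
# Find the Hive CLI JVM, then dump its thread stacks.
jps -lm | grep CliDriver         # prints "<pid> org.apache.hadoop.hive.cli.CliDriver ..."
jstack <pid> > hive-cli.jstack   # substitute the pid from the previous command
```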
It turned out the MR job had launched 30k mappers and 0 reducers. Correspondingly, it produced 30k small output files, and listing their metadata puts heavy pressure on the NameNode.
This shows that even with hive.merge.mapfiles and hive.merge.mapredfiles enabled (see also: solutions for Hive MR jobs producing many small or empty files), a SELECT-only query still leaves many small files behind.
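For context, these are the two merge settings in question. They only take effect on the write path of an INSERT, which is why a plain SELECT, whose mapper outputs are fetched directly by the client, does not benefit:

```sql
-- Merge small output files at the end of a map-only job.
SET hive.merge.mapfiles=true;
-- Merge small output files at the end of a map-reduce job.
SET hive.merge.mapredfiles=true;
```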
There are two ways to fix this:
1. Append an ORDER BY (on any column) to the statement. This adds a single reducer, so the fetch phase no longer has to read the metadata of all the small mapper output files.
2. Rewrite such SELECT statements to first write into a temporary table; the small files are then merged automatically according to the two parameters above, and the results can be read back from that table.
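Both workarounds can be sketched as follows (again with hypothetical table and column names):

```sql
-- Workaround 1: ORDER BY forces a single reducer, whose one
-- output file is what the fetch phase reads, instead of the
-- metadata of 30k mapper output files.
SELECT *
FROM access_log
WHERE user_id = 12345
ORDER BY user_id   -- any column works; we only need the reducer
LIMIT 10;

-- Workaround 2: materialize into a temporary table first.
-- The INSERT path honors hive.merge.mapfiles/mapredfiles,
-- so the small files are merged before we read them back.
CREATE TEMPORARY TABLE tmp_result AS
SELECT *
FROM access_log
WHERE user_id = 12345;

SELECT * FROM tmp_result LIMIT 10;
```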
