In fair-scheduler.xml, there's a queue named "test_queue", whose configuration is as follows:
<queue name="test_queue"> <minResources>120000 mb, 600vcores</minResources> <maxResources>200000 mb, 720vcores</maxResources> <maxRunningApps>5</maxRunningApps> <weight>2.0</weight> <schedulingPolicy>fair</schedulingPolicy> <minSharePreemptionTimeout>300</minSharePreemptionTimeout> </queue>
After I deleted these settings, the queue was not removed from the YARN monitoring webpage, even though all of its parameters (Min Resources, Max Resources, Fair Share) under "test_queue" were blank. I'm sure that fair-scheduler.xml was reloaded correctly.
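For reference, one way to double-check that the ResourceManager has actually re-read the allocation file is to look for the file's path in the ResourceManager log. The exact log wording differs between Hadoop versions and the log location below assumes the stock scripts, so treat this as a sketch:

# Assumed default log location; adjust $HADOOP_HOME and the filename pattern for your deployment.
grep -i "fair-scheduler.xml" $HADOOP_HOME/logs/yarn-*-resourcemanager-*.log | tail -n 5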
Then I checked it with the following command; the queue state was still running, just like all the other queues.
K11:/>hadoop queue -info root.test_queue
DEPRECATED: Use of this script to execute mapred command is deprecated.
Instead use the mapred command for it.

14/10/21 22:11:53 INFO client.RMProxy: Connecting to ResourceManager at server/ip:port
=====================
Queue Name : root.test_queue
Queue State : running
Scheduling Info : Capacity: 0.0, MaximumCapacity: UNDEFINED, CurrentCapacity: 0.0
Curiously enough, I tested whether I could still submit my Hive task to this queue, and it STILL WORKED!
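For context, this is roughly how the Hive task is pointed at the queue. It's a minimal sketch assuming Hive on MapReduce with Hadoop 2.x (older releases use mapred.job.queue.name instead of mapreduce.job.queuename), and some_table is just a placeholder:

# Submit a query whose MapReduce jobs are scheduled into root.test_queue (some_table is hypothetical).
hive -e "set mapreduce.job.queuename=root.test_queue; select count(*) from some_table;"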
As you can see, after the queue is deleted from fair-scheduler.xml, its MaxResources bulges to 100% of the total cluster resources, so anyone can "escape" the ACL and use the resources arbitrarily. Consequently, attention should be paid to this scenario.
The only solution I found is to restart the YARN service (stop-yarn.sh => start-yarn.sh), at the price of causing all ongoing tasks to fail. (If anyone has a better solution, please let me know by leaving a message!)
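For completeness, this is the restart sequence I mean. The script paths assume the stock sbin scripts under $HADOOP_HOME on the ResourceManager node; adjust for your deployment:

# Stop and start the whole YARN daemon set; note that running applications on the cluster will be killed.
$HADOOP_HOME/sbin/stop-yarn.sh
$HADOOP_HOME/sbin/start-yarn.sh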
© 2014-2017 jason4zhu.blogspot.com All Rights Reserved
If reposting, please credit the origin: Jason4Zhu