Big Data & Hadoop Administration Course

Understanding Big Data and Hadoop

Learning Objectives - In this module, you will understand what big data is and how Apache Hadoop solves big data problems. You will also learn about Hadoop cluster architecture, its core components & ecosystem, the Hadoop data loading & reading mechanism, and the role of a Hadoop cluster administrator.

Topics - Introduction to big data, limitations of existing solutions, Hadoop architecture, Hadoop components and ecosystem, data loading & reading from HDFS, replication rules, rack awareness theory, and the roles and responsibilities of a Hadoop cluster administrator.
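
For example, the HDFS data loading, reading and replication topics above map to standard HDFS shell commands such as the following (paths and file names are placeholders; on Hadoop 1.x the prefix is hadoop fs instead of hdfs dfs):

  hdfs dfs -mkdir -p /user/demo                 # create a directory in HDFS
  hdfs dfs -put sales.csv /user/demo/           # load a local file into HDFS
  hdfs dfs -cat /user/demo/sales.csv            # read the file back from HDFS
  hdfs dfs -setrep -w 3 /user/demo/sales.csv    # change the replication factor of the file to 3
  hdfs fsck /user/demo/sales.csv -files -blocks -racks   # inspect block placement and rack awareness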

Hadoop Architecture and Cluster Setup

Learning Objectives - In this module, you will understand the different Hadoop components, how HDFS works, Hadoop cluster modes, configuration files, and more. You will also learn how to set up and configure a Hadoop 1.0 cluster, set up Hadoop clients on Hadoop 1.0, and resolve problems simulated from a real-time environment.

Topics - Hadoop server roles and their usage, Hadoop installation and initial configuration, deploying Hadoop in pseudo-distributed mode, deploying a multi-node Hadoop cluster, installing Hadoop clients, understanding the working of HDFS and resolving simulated problems.
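
As a minimal sketch of the pseudo-distributed deployment above, assuming Hadoop 1.x is already installed and the two configuration files point at the local machine (the port shown is only an example):

  # core-site.xml : fs.default.name = hdfs://localhost:9000   (single-node filesystem URI)
  # hdfs-site.xml : dfs.replication = 1                       (one copy per block on a single node)
  hadoop namenode -format     # initialise the NameNode metadata directory (first run only)
  start-dfs.sh                # start NameNode, SecondaryNameNode and DataNode
  start-mapred.sh             # start JobTracker and TaskTracker (Hadoop 1.x)
  hadoop dfsadmin -report     # confirm the DataNode has registered with the NameNode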

Hadoop Cluster Administration & Understanding MapReduce

Learning Objectives - In this module, you will understand the working of the secondary NameNode, working with a distributed Hadoop cluster, enabling rack awareness, the maintenance mode of a Hadoop cluster, and adding or removing nodes from your cluster in both ad hoc and recommended ways. You will also understand the MapReduce programming model and schedulers from a Hadoop administrator's perspective.

Topics - Understanding the secondary NameNode, working with a distributed Hadoop cluster, decommissioning and commissioning of nodes, understanding MapReduce, understanding schedulers and enabling them.
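
As an illustration, the recommended (non ad hoc) way to decommission a DataNode uses an excludes file rather than simply stopping the daemon; a rough sketch, assuming dfs.hosts.exclude already points at the excludes file (the hostname and path are placeholders):

  echo "datanode05.example.com" >> /etc/hadoop/conf/dfs.exclude   # mark the node for retirement
  hdfs dfsadmin -refreshNodes   # NameNode re-reads the include/exclude lists and starts decommissioning
  hdfs dfsadmin -report         # wait until the node is reported as Decommissioned
  # Commissioning is the reverse: add the new host to the includes/slaves file,
  # start its DataNode daemon, refresh nodes, then run the balancer so it receives data.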

Backup, Recovery and Maintenance

Learning Objectives - In this module, you will understand day-to-day cluster administration tasks: balancing data in the cluster, protecting data by enabling trash, attempting a manual failover, creating backups within or across clusters, safeguarding your metadata, and performing metadata recovery or a manual NameNode failover. You will also learn how to restrict HDFS usage in terms of file count and volume of data, and more.

Topics - Key admin commands such as Balancer, Trash, Import Checkpoint and DistCp, data backup and recovery, enabling trash, namespace count quota and space quota, manual failover and metadata recovery.
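
Illustrative forms of these admin commands (cluster addresses, paths and limits are placeholders):

  hdfs balancer -threshold 10                        # rebalance until nodes are within 10% of average utilization
  hadoop distcp hdfs://nn1:8020/data hdfs://nn2:8020/backup/data   # copy data across clusters for backup
  hdfs dfsadmin -setQuota 100000 /user/projectA      # namespace quota: limit the count of files and directories
  hdfs dfsadmin -setSpaceQuota 10t /user/projectA    # space quota: limit the volume of data
  hdfs dfsadmin -saveNamespace                       # checkpoint the metadata (NameNode must be in safemode)
  hdfs namenode -importCheckpoint                    # recover metadata from the latest checkpoint
  # Trash is enabled by setting fs.trash.interval (in minutes) in core-site.xml;
  # hdfs dfs -expunge empties it manually.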

Hadoop 2.0 Cluster: Planning and Management

Learning Objectives - In this module, you will gain insights into cluster planning and management: the various aspects to consider while planning a new cluster, capacity sizing, understanding industry recommendations, comparing different Hadoop distributions, understanding workload and usage patterns, and some examples from the world of big data.

Topics - Planning a Hadoop 2.0 cluster, cluster sizing, hardware, network and software considerations, popular Hadoop distributions, workload and usage patterns, industry recommendations.
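
As a rough, illustrative sizing calculation (the figures are examples only): to store 100 TB of data with a replication factor of 3 and about 25% headroom for intermediate and temporary data, you need roughly 100 TB x 3 x 1.25 = 375 TB of raw HDFS capacity; with DataNodes carrying 12 x 4 TB disks (48 TB each), that is on the order of 8 worker nodes before allowing for growth.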

Hadoop 2.0 and its Features

Learning Objectives - In this module, you will learn about the new features of Hadoop 2.0: HDFS High Availability, the YARN framework and job execution flow, MRv2 and federation. You will also review the limitations of Hadoop 1.x and set up a Hadoop 2.0 cluster in pseudo-distributed and distributed modes.

Topics - Limitations of Hadoop 1.x, features of Hadoop 2.0, YARN framework, MRv2, Hadoop high availability and federation, YARN ecosystem and Hadoop 2.0 cluster setup.
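
A few YARN-side commands that illustrate the Hadoop 2.0 topics above (the application ID is a placeholder):

  yarn node -list                              # NodeManagers registered with the ResourceManager
  yarn application -list                       # running YARN applications, including MRv2 jobs
  yarn logs -applicationId <application-id>    # aggregated logs of a completed application
  hdfs dfsadmin -report                        # HDFS view: live DataNodes, capacity, under-replicated blocks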

Setting up Hadoop 2.x with High Availability and Upgrading Hadoop

Learning Objectives - In this module, you will learn to set up Hadoop 2 with high availability, upgrade from v1 to v2, and import data from an RDBMS into HDFS. You will also understand why Oozie, Hive and HBase are used and how these components work.

Topics - Configuring Hadoop 2 with high availability, upgrading to Hadoop 2, working with Sqoop, understanding Oozie, working with Hive, working with HBase.
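
For example, once high availability is configured, manual failover and state checks are done with hdfs haadmin, and a basic Sqoop import from an RDBMS looks roughly like this (the NameNode IDs, JDBC URL, credentials and paths are placeholders):

  hdfs haadmin -getServiceState nn1      # is NameNode nn1 active or standby?
  hdfs haadmin -failover nn1 nn2         # manual failover from nn1 to nn2
  sqoop import --connect jdbc:mysql://dbhost/sales --table customers \
        --username dbuser -P --target-dir /user/etl/customers -m 4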

Project: Cloudera Manager and Cluster Setup, Overview of Kerberos

Learning Objectives - In this module, you will learn to set up a cluster using Cloudera Manager, optimize Hadoop/HBase/Hive performance parameters and understand the basics of Kerberos. You will also learn to set up Pig for use in local and distributed modes to perform data analytics.

Topics - Cloudera Manager and cluster setup, Hive administration, HBase architecture, HBase setup, Hadoop/Hive/HBase performance optimization, Pig setup and working with Grunt, why Kerberos and how it helps.
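
A few of the client-side entry points touched in this module (the Kerberos principal is a placeholder):

  hive -e "SHOW DATABASES;"        # quick smoke test of the Hive setup
  hbase shell                      # interactive HBase shell (status, list, scan, ...)
  pig -x local                     # Grunt shell running against the local filesystem
  pig -x mapreduce                 # Grunt shell running against the cluster
  kinit admin/admin@EXAMPLE.COM    # obtain a Kerberos ticket before using a secured cluster
  klist                            # confirm the ticket cache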

Cloudera’s CDH (Installation, Management, Monitoring using Cloudera Manager)

POC (Proof of Concept), Profile Upgrade, Interview Questions and Certification Guidance.

Cloudera & Hortonworks Certification Assistance

Duration: 80 hours (including hands-on sessions)

Mail us @ info@jpasolutions.in for complete course content.

Big Data

Every Saturday @ 10 am at JPA Solutions, conducted by real-time Cloudera Certified Professionals. Seats are limited, so kindly confirm if you are interested in joining this session.

For more details, reach us @ 8754415111 / 044-45002120