Core Java Sample Resumes
Core Java Sample Resume 0-2 Years Experience
· 3 years of experience across the software development life cycle: design, development, and support of systems and application architecture.
· More than two years of experience in Hadoop Development/Administration built on six years of experience in Java Application Development.
· Good knowledge of the Hadoop ecosystem, HDFS, Big Data, and RDBMS concepts.
· Hands-on experience working with ecosystem components such as Hive, Pig, Sqoop, MapReduce, Flume, and Oozie.
· Strong knowledge of Hadoop and Hive, including Hive's analytical functions.
· Captured data from existing databases that provide SQL interfaces using Sqoop.
· Efficient in building Hive, Pig, and MapReduce scripts.
· Implemented proofs of concept on the Hadoop stack and different big data analytic tools, including migration from different databases (e.g., Teradata, Oracle, MySQL) to Hadoop.
· Successfully loaded files into Hive and HDFS from MongoDB, Cassandra, and HBase.
· Loaded datasets into Hive for ETL operations.
· Good knowledge of Hadoop cluster architecture and cluster monitoring.
· Experience in using DbVisualizer, ZooKeeper, and Cloudera Manager.
· Hands-on experience with IDE tools such as Eclipse and Visual Studio.
· Experience in database design using stored procedures, functions, and triggers, and strong experience in writing complex queries for DB2 and SQL Server.
· Experience with Business Objects and SSRS; created universes and developed many Crystal Reports and WebI reports.
· Excellent problem-solving, analytical, communication, and interpersonal skills.
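As a minimal sketch of the MapReduce pattern referenced in the summary above (plain Java only; a real Hadoop job would implement Mapper and Reducer classes from the org.apache.hadoop.mapreduce API, which are omitted here so the example stays self-contained, and the class name WordCountSketch is hypothetical):

```java
import java.util.HashMap;
import java.util.Map;

public class WordCountSketch {
    // "Map" phase: split each input line into words (conceptually emitting
    // (word, 1) pairs); "reduce" phase: sum the counts per word.
    public static Map<String, Integer> countWords(String[] lines) {
        Map<String, Integer> counts = new HashMap<>();
        for (String line : lines) {
            for (String word : line.toLowerCase().split("\\s+")) {
                if (!word.isEmpty()) {
                    counts.merge(word, 1, Integer::sum); // reduce step: sum per key
                }
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        System.out.println(countWords(new String[]{"big data big"})); // counts: big=2, data=1
    }
}
```

In a real Hadoop job the map and reduce steps run on separate nodes with a shuffle phase between them; the per-key aggregation above is what the reducer performs.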
February 2013 to Present
Installed raw Hadoop and NoSQL applications and developed programs for sorting and analyzing data.
· Replaced Hive's default Derby metastore with MySQL. Executed queries using Hive and developed MapReduce jobs to analyze data.
· Developed Pig Latin scripts to extract data from web server output files and load it into HDFS.
· Developed Pig UDFs to preprocess the data for analysis.
· Developed Hive queries for the analysts in a Hadoop environment provided by Hortonworks.
· Involved in loading data from Linux and UNIX file systems into HDFS.
· Supported QA environment setup and updated configurations for implementing Pig scripts.
· Environment: Core Java, Apache Hadoop (Hortonworks), HDFS, Pig, Hive, Cassandra, Shell Scripting, MySQL, Linux, UNIX
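The Pig UDF preprocessing described above can be sketched in plain Java (hypothetical example; a real Pig UDF extends org.apache.pig.EvalFunc and overrides exec(Tuple), but the cleaning logic itself is ordinary Java, shown here without the Pig dependency so it stays self-contained):

```java
public class TrimAndLower {
    // Core of a hypothetical preprocessing UDF: normalize a raw text field
    // before analysis (trim edges, lowercase, collapse internal whitespace).
    public static String clean(String raw) {
        if (raw == null) {
            return null; // Pig UDFs conventionally pass nulls through
        }
        return raw.trim().toLowerCase().replaceAll("\\s+", " ");
    }

    public static void main(String[] args) {
        System.out.println(clean("  Hello   WORLD  ")); // prints "hello world"
    }
}
```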
March 2012 to January 2013
Imported and exported data in HDFS, analyzed Big Data using the Hadoop environment, and developed UDFs using Hive, Pig Latin, and Java.
· Worked on analyzing the Hadoop cluster and different big data analytic tools, including Pig, the HBase NoSQL database, and Sqoop.
· Imported and exported data between HDFS and Hive using Sqoop.
· Extracted files from MongoDB through Sqoop, placed them in HDFS, and processed them.
· Experience with NoSQL databases.
· Wrote Hive UDFs to extract data from staging tables.
· Involved in creating Hive tables, loading them with data, and writing Hive queries that run internally as MapReduce jobs.
· Familiarized with job scheduling using the Fair Scheduler so that CPU time is distributed evenly among all jobs.
· Managed Hadoop log files.
· Analyzed web log data using HiveQL.
· Environment: Java 6, Eclipse, Hadoop, Hive, HBase, MongoDB, Linux, MapReduce, HDFS, Shell Scripting, MySQL
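The Hive UDF work described above (extracting data from staging tables) could look like the following plain-Java sketch. This is hypothetical: a real Hive UDF extends org.apache.hadoop.hive.ql.exec.UDF and names the method evaluate(), which Hive invokes once per row; the Hive dependency is omitted so the example compiles on its own.

```java
import java.util.regex.Pattern;

public class ExtractField {
    // evaluate(): pull the nth field from a delimited staging record,
    // mirroring what Hive would call for each row of the staging table.
    public static String evaluate(String record, String delimiter, int index) {
        if (record == null) {
            return null; // null rows pass through, as Hive UDFs expect
        }
        // -1 keeps trailing empty fields; quote() treats the delimiter literally
        String[] parts = record.split(Pattern.quote(delimiter), -1);
        return (index >= 0 && index < parts.length) ? parts[index] : null;
    }

    public static void main(String[] args) {
        System.out.println(evaluate("2013-01-05|GET|/index.html", "|", 2)); // prints "/index.html"
    }
}
```

Registered in Hive (CREATE TEMPORARY FUNCTION ... USING JAR ...), such a function can then be applied directly inside HiveQL queries over the staging tables.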
Master of Engineering, JNTU, 2011
Programming Language: Java, C++, C, SQL, Python
Java Technologies: JDBC, JSP, Servlets
RDBMS/NoSQL: SQL Server, DB2, HBase, Cassandra, MongoDB
Scripting: Shell Scripting
IDE: Eclipse, Netbeans
Operating Systems: Linux, UNIX, Windows 98/2000/XP
Hadoop Ecosystem: MapReduce, Sqoop, Hive, Pig, HBase, Cassandra, HDFS, ZooKeeper