Big Data with Spark - Self Paced Course
|Self Paced Classes|
(20% off - Early Bird) (Incl. Taxes)
|Upcoming Live Courses|
As data has grown bigger and more complex, and the rate at which it must be processed has increased, traditional tools can no longer cope: a single computer does not suffice because of its I/O, CPU, and RAM limitations.
That is when a new generation of tools that run on multiple computers is required. Apache Spark is probably the fastest and most efficient of these distributed computing tools.
This course will take you from the very basics to an advanced level in Big Data analysis and stream processing using Apache Spark. It will be a very hands-on course.
We will start with the basics of Big Data, understand the architecture of Apache Spark, and solve problems on Spark using Python. Then we will cover Spark SQL and the basics of Machine Learning, and solve a few classic problems using MLlib.
Then we will learn how to process fast-moving data. In other words, we will do stream processing using Spark.
By the end of this course, you should be able to handle all three dimensions of Big Data processing:
- Volume (Spark SQL)
- Velocity (Spark Streaming)
- Variety (MLlib)
Why learn Big Data with Apache Spark?
Almost every organisation has humongous data that must be analysed to grow the business, increase sales, or improve customer service. Data analysis is therefore a hot trend in every industry.
Another way to measure the demand for Data Analysts is to look at the number of jobs posted around the world for these technologies.
Our classes are conducted live online by our instructors via webinar or Hangouts. These are not pre-recorded classes. The instructor delivers the class using presentations, collaborative drawing tools, and screen sharing. All attendees are usually muted during the class; however, they can ask questions in the webinar or Hangouts chat window. The instructor answers questions immediately after explaining a concept, and also asks questions during the sessions to ensure maximum student engagement.
Every class is recorded, complete with the screen and the audio, and uploaded to the Learning Management System which is accessible to our attendees for life.
At the end of each session, assignments are provided which the attendees have to submit in the LMS (Learning Management System). The assignments are continuously reviewed by our instructors and teaching assistants. In case we conclude that an attendee needs extra help, we schedule extra one-on-one sessions with that attendee.
What makes Big Data with Apache Spark course unique?
- Interactive Classes: More Questions. Fewer Lectures.
- Simple explanations of complex topics by industry experts
- Hands-on workshops and real-time projects
- Quizzes & Assignments
- Certificate of completion at the end of the course
- A real-time project involving Big Data with Apache Spark
- Lifetime access to course content
- CloudxLab™ - Access to the cloud infrastructure if learners don't wish to install Apache Spark on their computers
What are the prerequisites to join Big Data with Apache Spark course?
To be able to take maximum benefit out of this course, you should have knowledge of the following:
- Basics of SQL. You should know the basics of SQL and databases.
- Basic programming know-how. We will be providing video classes covering the basics of Python. What is expected of the attendee is the ability to create a directory and see what's inside a file from the command line, and an understanding of loops in any programming language.
In addition, the attendee should have the following hardware infrastructure:
- A good internet connection. An internet speed of 2 Mbps is good enough.
- Access to a computer. Since it is an online course, you would have to install the webinar or Hangouts software on your computer.
- Nice to have: a power backup for your router as well as your computer.
- Nice to have: a good-quality headset.
What kind of project / real time experience?
After all sessions are over, we ask for the students' preferences for a project. We form teams of 3-4 members and, based on their interests, assign a project to each team. A project usually runs for three weeks. If a team has an idea it wants to work on as a project, we screen the idea and the team can work on it; otherwise, we assign a project from the industry. Since it is not possible to provide real data from the industry, we provide anonymized data for projects. We continuously support and guide the teams during projects by conducting regular scheduled meetings, and also provide individual assistance.
The projects assigned can also be based on public datasets. Various datasets are available online for free.
A few examples of projects are as follows:
- Understanding the trends and patterns in Bitcoin transaction graphs through qualitative analysis. Bitcoin is a virtual currency, and the way a coin is mined is based on transaction logs. Bitcoin transaction logs grow almost every millisecond, so processing these transaction logs is a real challenge.
- Understanding the correlation between the temperature of various cities and the stock market.
- Processing Apache logs for ERRORs. Preparing web analytics based on Apache web logs:
- Which services are slow
- Which services have a high number of users
- What is the failure rate of each service
- Preparing recommendations based on the Apache logs.
- Using social media to compare a brand's marketing campaigns. The comparison is basically done using sentiment analysis.
Associate Manager - Technology at Thomson Reuters
VP - Engineering at CommonFloor
Data Scientist at Data Semantics
Consultant - Smart Grid Communications at Essel Vidyut Vitaran Nigam
PhD, Wireless Mesh Networks
Director Engineering, Target Technology Services
Dr. Makhan Virdi
Researcher, NASA DAAC
Big Data with Apache Spark: This is not a typical (online) classroom course. It is not just a series of videos with a one-way flow of information. Instead, it is a highly interactive setting where the instructor shares insightful details when any question or doubt is raised during the lecture. Sandeep passionately teaches complicated concepts in easy-to-understand language, supported by good analogies and effective examples. The course is well structured, covering the concepts of Big Data in breadth and depth. I am currently halfway through the course and I am already working on translating the concepts learned in class to real-world problems.
Hari Madhav Purwar
Core Java Developer
Review: I had been having difficulty understanding the concepts behind big data technology: why we need big data, when we need it, and which tools handle such cases. Sandeep's methodology of starting from the base and building learning on that base helps in architecting the problem towards the right solution.
Thanks Sandeep for such wonderful sessions :)
Sr. Oracle DBA at SCDL
See more reviews on our Facebook page.
Big Data with Apache Spark Introduction Video
- What Is Apache Spark?
- A Unified Stack
- Who Uses Spark, and for What?
- A Brief History of Spark
- Spark Versions and Releases
- Storage Layers for Spark
- Downloading Spark
- Introduction to Spark’s Python and Scala Shells
- Introduction to Core Spark Concepts
- Standalone Applications
- RDD Basics
- Creating RDDs
- RDD Operations
- Passing Functions to Spark
- Common Transformations and Actions
- Persistence (Caching)
- Creating Pair RDDs
- Transformations on Pair RDDs
- Actions Available on Pair RDDs
- Data Partitioning (Advanced)
- File Formats
- Structured Data with Spark SQL
- Broadcast Variables
- Working on a Per-Partition Basis
- Piping to External Programs
- Numeric RDD Operations
- Spark Runtime Architecture
- Deploying Applications with spark-submit
- Packaging Your Code and Dependencies
- Scheduling Within and Between Spark Applications
- Cluster Managers
- Which Cluster Manager to Use?
- Configuring Spark with SparkConf
- Components of Execution: Jobs, Tasks, and Stages
- Finding Information
- Key Performance Considerations
- Linking with Spark SQL
- Using Spark SQL in Applications
- Loading and Saving Data
- JDBC/ODBC Server
- User-Defined Functions
- Spark SQL Performance
- A Simple Example
- Architecture and Abstraction
- Output Operations
- Input Sources
- 24/7 Operation
- Streaming UI
- Performance Considerations
- Kafka Basics
- System Requirements
- Machine Learning Basics
- Data Types
- Tips and Performance Considerations
- Pipeline API
What Certificate do we provide?
Based on your performance in Quizzes, Assignments and Projects, we provide the certificate in the following forms:
- 1. Hard Copy
We send a hard copy of the certificate to your address.
- 2. Digitally Signed Copy
We provide the PDF of the certificate that is digitally signed by KnowBigData.com.
- 3. Share Your Success
Share your course record with employers and educational institutions through a secure permanent URL.
- 4. LinkedIn Recommendation & Endorsements
We will provide a LinkedIn Recommendation based on your performance. Also, we will endorse you with tags such as Hadoop, Big Data.
- 5. Verifiable Certificate
We have provided an online form to verify the authenticity of a certificate. This helps recruiters verify the certificates provided by us.
About the Team
Founder & Chief Instructor
Past Amazon.com, InMobi.com, Founder @ tBits Global, D.E.Shaw
Education Indian Institute of Technology, Roorkee
For the last 12 years, Sandeep has been building products and churning large amounts of data for various product firms. He has all-around experience in software development and big data analysis.
Apart from digging data and technologies, Sandeep enjoys conducting interviews and explaining difficult concepts in simple ways.
Big Data with Apache Spark - Frequently Asked Questions
Yes. Java is not required for understanding this course. We will be covering Spark's Python API for programming in Spark. So, you should be able to follow the course if you meet the following criteria:
- Basics of SQL. You should know the basics of SQL and databases. If you know about filters in SQL, you are expected to understand the course.
- Basic programming know-how. If you understand loops in any programming language, and if you are able to create a directory and see what's inside a file from the command line, you will grasp the concepts of this course even if you have not really touched programming for the last 10 years! In addition, we will be providing video classes on the basics of Python.
No. We stopped classroom training a while back when we realized that students attending our online instructor-led classes were performing better in the assignments than students in our offline classrooms. Moreover, students ask more questions in online sessions than in classroom sessions.
Also, it is very difficult to get really good training locally in every city. So, it is better to have really good online training than to insist on classroom sessions.
To check whether online sessions would work for you, please attend our demo sessions. I can assure you that you will like the instructor-led online training.
- Using our CloudxLab: To give our candidates real experience of big data computing, we have provided a cluster of computers with all the big data technologies running on them, since most big data technologies make sense only when run on multiple machines. You only have to use an SSH client (PuTTY on Windows) to connect to our cluster. Whether you are at home or in the office, and whether you are using a laptop or a tablet, you will be able to use Spark. See more details about CloudxLab here.
- Using Virtual Machines: The second and more traditional way to experiment with Spark is to install a virtual machine. We will assist you in setting it up. However, most of our students are so happy with our CloudxLab that they hardly ever install a virtual machine.
Our classes are held every weekend, on Saturdays and Sundays, either in the mornings or in the evenings. So, there are two classes of 3 hours each: one on Saturday and one on Sunday.
In addition to the 6 hours of weekend classes, you will have to devote around 4-6 hours every week to complete assignments.
If you are not able to attend a particular class, you can watch the recordings of that class. Otherwise, you can attend the same class in another running batch.
Sometimes, for various reasons, people find it difficult to continue a course. In case that happens, you can continue in another session in the future, or you can request a refund. Here are the guidelines for requesting a refund.
Yes, the course material is available to our students for life. You will have access to the content in the LMS forever.
Yes, we provide our own certification. At the end of your course, you will work on a real-time project. You will receive a problem statement along with a dataset to work on in our CloudxLab. Once you are successfully through the project (reviewed by an expert), you will be awarded a certificate with a performance-based grading. If your project is not approved in the first attempt, you can take extra assistance to understand the concepts better and reattempt the project free of cost.
Big Data with Apache Spark is one of the hottest career options available today for software engineers. There are thousands of jobs currently in the U.S. alone for Data Analysts, and the demand for Data Analysts far exceeds the supply. Learn more about career prospects in Data Analysis at naukri.com and indeed.com.
Our cluster has all the software required for the course, plus some more components such as Git and R. In case you require particular software to be installed on the cluster that is not already there, please let us know.