Basic Steps for Designing Big Data Architecture

In my earlier posts I talked about the basics of Big Data and how it can become a future nightmare, followed by some must-know facts about Big Data. Today, let us talk about a very important and basic step for working with Big Data, i.e. “Big Data Architecture”.

Big data architecture is the logical and/or physical structure of how big data will be stored, accessed and managed within a big data or IT environment. It logically defines how big data solutions will work based on core components (hardware, database, software, storage) used, flow of information, security, and more. Big data architecture primarily serves as the key design reference for big data infrastructures and solutions.

Big Data Types:

Big data can be stored, acquired, processed, and analyzed in many ways. Every big data source has different characteristics, including frequency, volume, velocity, type, and veracity of data. When big data is processed and stored, additional dimensions come into play, such as governance, security, and policies.

Designing a Big Data architecture is already a complex task. Add to that the speed of technological innovation and the range of competing products in the market, and it becomes quite a formidable task for any Big Data Architect.

Before designing a big data reference architecture, the most vital step is identifying whether a particular business scenario is a Big Data problem or not. These problems can be further categorized into types. Categorizing big data problems by type makes it easy to determine the individual characteristics of each data type. Big Data types can be categorized as follows:

  1. Machine-Generated Data
  2. Web and Social Data
  3. Transaction Data
  4. Human-Generated Data
  5. Biometrics
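To make the categorization concrete, here is a minimal sketch that maps example data sources to the five types listed above. The example sources and the lookup function are purely illustrative assumptions, not a standard taxonomy:

```python
# Hypothetical examples of sources falling under each of the
# five big data types listed above (illustrative labels only).
BIG_DATA_TYPES = {
    "Machine-Generated Data": ["sensor readings", "server logs", "telemetry"],
    "Web and Social Data": ["tweets", "blog posts", "clickstreams"],
    "Transaction Data": ["invoices", "payment records", "orders"],
    "Human-Generated Data": ["emails", "documents", "survey responses"],
    "Biometrics": ["fingerprints", "facial scans", "voice prints"],
}

def classify_source(source: str) -> str:
    """Return the big data type a given example source falls under."""
    for data_type, examples in BIG_DATA_TYPES.items():
        if source in examples:
            return data_type
    return "Unknown"

print(classify_source("server logs"))  # Machine-Generated Data
```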

Classification of Big Data Characteristics using Big Data Type

Data from different sources has different characteristics; for example, social media data can contain video, images, and unstructured text such as blog posts. Once data is classified according to its characteristics, it can easily be matched with the appropriate big data pattern. Listed below are some of the common characteristics by which data is assessed and categorized.

  • Analysis Type: Real-Time Analysis or Batch Analysis

Give careful consideration to choosing the analysis type, since it affects several other decisions about products, tools, hardware, data sources, and expected data frequency. A mix of both types may be required by the use case:

  1. Fraud Detection: Real-Time Analysis required
  2. Trend Analysis / Business Decisions: Batch Mode Analysis
  • Processing Methodology: Type of technique to be applied for processing data

The selected methodology helps in choosing the appropriate tools and techniques for the Big Data solution.

  • Data Frequency and Size: Amount of data and the speed at which it will be obtained.

This characteristic of the data helps in deciding the storage mechanism, format, and pre-processing tools. Size and frequency vary across data sources:

  1. On Demand – Social Media Data
  2. Continuous Feed / Real Time – Weather Data, Transactional Data
  3. Time Series – Time Based Data
  • Data Type: Type of Data to be processed.

Knowing the data type helps in segregating data in storage.

  • Content Format: Format of Incoming Data

The format tells us how the incoming data needs to be processed and which tools and techniques should be used. The format could be structured (e.g. RDBMS), unstructured (audio, video, images), or semi-structured.

  • Data Source: Sources of Data Generation

Identifying the data sources is vital in determining the scope from a business perspective, e.g. web and social media, machine-generated, human-generated, etc.

  • Data Consumers: List of possible consumers of processed data
  1. Business Processes
  2. Business Users
  3. Enterprise Applications
  4. Individual people in Various Business Roles
  5. Part of process flows
  6. Other data repositories or enterprise applications
  • Hardware: Hardware on which the Big Data Solution is to be implemented

Understanding the limitations of the hardware helps inform the choice of Big Data solution.
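The characteristics above can be gathered into a single workload profile before matching a use case to a big data pattern. The sketch below is one hypothetical way to record them; the field names and the fraud-detection example values are assumptions for illustration, not a standard schema:

```python
from dataclasses import dataclass, field

# A hypothetical profile capturing the assessment characteristics
# discussed above (analysis type, frequency, format, sources, consumers).
@dataclass
class BigDataProfile:
    analysis_type: str           # "real-time", "batch", or "mixed"
    processing_methodology: str  # e.g. "predictive", "ad-hoc query"
    data_frequency: str          # "on-demand", "continuous", "time-series"
    daily_volume_gb: float
    data_type: str               # e.g. "transactional", "historical"
    content_format: str          # "structured", "semi-structured", "unstructured"
    data_sources: list = field(default_factory=list)
    consumers: list = field(default_factory=list)

# Example profile for a fraud-detection use case (values assumed).
fraud_detection = BigDataProfile(
    analysis_type="real-time",
    processing_methodology="predictive",
    data_frequency="continuous",
    daily_volume_gb=500.0,
    data_type="transactional",
    content_format="semi-structured",
    data_sources=["payment gateway", "machine generated"],
    consumers=["business processes", "enterprise applications"],
)
print(fraud_detection.analysis_type)  # real-time
```

Filling out such a profile per use case makes the later steps (vendor selection, deployment, sizing) easier to reason about.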

6 Basic Steps of Big Data Architecture Designing:

Once we have analyzed the big data scenario of the company, characteristics of the Data and the type of Big Data Pattern, we can move to the planning of Big Data Reference Architecture. We could design the Reference architecture just by following the listed 6 Easy Steps:


  1. Analyze the Problem:

The task to be performed at this step is similar to what has been explained in the earlier sections: analyze whether we need a Big Data solution at all, the characteristics of the data, and the type of Big Data pattern.

  2. Vendor Selection:

This decision is made solely on the basis of what functionality we have to achieve through the tools. There are many vendors in the market with a very large range of tools for different tasks. It is up to the organization to decide what kind of tool it would like to opt for.

  3. Deployment Strategy:

This determines whether the solution will be on-premise, cloud-based, or a mix of both.

  1. An on-premise solution tends to be more secure; however, hardware maintenance costs considerably more money, effort, and time.
  2. A cloud-based solution is more cost-effective in terms of scalability, procurement, and maintenance.
  3. A mixed deployment strategy gives us a bit of both worlds, and data storage can be planned according to its use.
  4. Capacity Planning:

At this step we evaluate hardware and infrastructure sizing, considering the factors below:

  1. Data volume for the one-time historical load
  2. Daily data ingestion volume
  3. Retention period of data
  4. Data replication for critical data
  5. Time period for which the cluster is sized, after which it is scaled horizontally
  6. Multi-datacenter deployment
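The factors above can be combined into a back-of-the-envelope storage estimate. The formula and all figures below are hypothetical assumptions for illustration; adjust them for your own workload:

```python
# Rough cluster storage estimate from the capacity planning factors:
# historical load, daily ingest over the retention period, replication,
# and some headroom before horizontal scaling becomes necessary.
def required_storage_tb(historical_load_tb: float,
                        daily_ingest_gb: float,
                        retention_days: int,
                        replication_factor: int,
                        growth_headroom: float = 1.2) -> float:
    """Estimate raw cluster storage (TB) for the sizing period."""
    ingested_tb = daily_ingest_gb * retention_days / 1024
    raw_tb = (historical_load_tb + ingested_tb) * replication_factor
    return raw_tb * growth_headroom

# Example: 10 TB historical load, 100 GB/day ingest,
# 1-year retention, 3x replication (all assumed values).
print(round(required_storage_tb(10, 100, 365, 3), 1))  # 164.3
```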
  5. Infrastructure Sizing:

The inferences from the former step help in infrastructure planning, such as the type of hardware required. This step also involves deciding the number of environments required. Important factors to be considered:

  1. Type of processing: memory-intensive or I/O-intensive
  2. Type of disk
  3. Number of disks per machine
  4. Memory size and HDD size
  5. Number of CPUs and cores
  6. Data retained and stored in each environment
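Given a storage target from the capacity planning step, the per-machine factors above translate into a node count. The sketch below is a rough estimate under assumed numbers, not vendor guidance; the 70% usable fraction is a hypothetical allowance for OS, temp, and shuffle space:

```python
import math

# Rough node count from the per-machine sizing factors above.
def nodes_needed(total_storage_tb: float,
                 disks_per_node: int,
                 disk_size_tb: float,
                 usable_fraction: float = 0.7) -> int:
    """Nodes required, reserving space for OS, temp, and shuffle data."""
    usable_per_node = disks_per_node * disk_size_tb * usable_fraction
    return math.ceil(total_storage_tb / usable_per_node)

# Example: 165 TB raw need, 12 x 4 TB disks per node (assumed values).
print(nodes_needed(165, 12, 4))  # 5
```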
  6. Backup and Disaster Recovery Sizing:

Backup and disaster recovery is a very important part of planning, and involves the following considerations:

  1. The criticality of data stored
  2. RPO (Recovery Point Objective) and RTO (Recovery Time Objective) requirements
  3. Active-Active or Active-Passive Disaster recovery
  4. Multi datacenter deployment
  5. Backup Interval (can be different for different types of data)
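Tying the last two considerations together: the backup interval per data type should not exceed the RPO for that data's criticality, since the interval bounds the maximum data loss. The criticality tiers and RPO thresholds below are hypothetical examples only:

```python
# Hypothetical RPO thresholds (hours) per data criticality tier.
RPO_HOURS = {"critical": 1, "important": 24, "archival": 168}

def meets_rpo(criticality: str, backup_interval_hours: float) -> bool:
    """A backup interval no longer than the RPO bounds data loss to the RPO."""
    return backup_interval_hours <= RPO_HOURS[criticality]

print(meets_rpo("critical", 0.5))  # True  (30-minute backups for critical data)
print(meets_rpo("important", 48))  # False (48-hour interval misses a 24-hour RPO)
```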

In my next post I will discuss the different layers of the architecture and the functionalities of each of them. Till then, let me know through the comments below if I have left out anything in the planning steps.
