Amazon Web Services (AWS) is a secure cloud services platform, offering compute power, database storage, content delivery, and other functionality to help businesses scale and grow.

It was launched in 2006 from the internal infrastructure that Amazon.com built to handle its online retail operations. AWS was one of the first companies to introduce a pay-as-you-go cloud computing model that scales to provide users with compute, storage or throughput as needed.

AWS offers many different tools and solutions for enterprises and software developers, serving customers in up to 190 countries. Government agencies, educational institutions, nonprofits, and private organizations can all use AWS services.


The various services provided by AWS fall into the following categories:


Compute

  1. EC2 (Elastic Compute Cloud) — Virtual machines in the cloud over which you have OS-level control. You can run whatever you want on them.
  2. Lightsail — If you don’t have any prior experience with AWS, this is for you. It automatically deploys and manages the compute, storage, and networking capabilities required to run your applications.
  3. ECS (Elastic Container Service) — A highly scalable container service that allows you to run Docker containers in the cloud.
  4. EKS (Elastic Container Service for Kubernetes) — Allows you to use Kubernetes on AWS without installing and managing your own Kubernetes control plane. It is a relatively new service.
  5. Lambda — AWS’s serverless technology, which allows you to run functions in the cloud. It is a huge cost saver, as you pay only when your functions execute.
  6. Batch — Enables you to easily and efficiently run batch computing workloads of any scale on AWS using Amazon EC2 and EC2 Spot Fleet.
  7. Elastic Beanstalk — Automates the deployment and provisioning of the resources needed to run your application, such as a highly scalable production website.
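The pay-per-execution model behind Lambda can be illustrated with a little arithmetic. The rates below are assumptions for illustration only, not current AWS prices:

```python
# Illustrative sketch of Lambda's pay-per-use billing: you pay per
# invocation plus per GB-second of compute time actually consumed.
# The rates are assumed for illustration, not current AWS prices.
PRICE_PER_REQUEST = 0.20 / 1_000_000      # $ per invocation (assumed)
PRICE_PER_GB_SECOND = 0.0000166667        # $ per GB-second (assumed)

def monthly_lambda_cost(invocations, avg_duration_ms, memory_mb):
    """Estimate monthly cost: you pay only while your functions run."""
    gb_seconds = invocations * (avg_duration_ms / 1000) * (memory_mb / 1024)
    return invocations * PRICE_PER_REQUEST + gb_seconds * PRICE_PER_GB_SECOND

# 5M invocations of 120 ms each at 512 MB: a few dollars a month,
# and exactly zero when the function never runs.
cost = monthly_lambda_cost(5_000_000, 120, 512)
```

The key contrast with EC2 is the zero-idle case: an unused function costs nothing, while an idle virtual machine still bills by the hour.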


Storage

  1. S3 (Simple Storage Service) — AWS’s object storage service, in which we can store objects such as files, images, documents, songs, etc. It cannot be used to install software, games, or an operating system.
  2. EFS (Elastic File System) — Provides file storage for use with your EC2 instances. It uses the NFSv4 protocol and can be used concurrently by thousands of instances.
  3. Glacier — An extremely low-cost archival service for storing files for a long time, like a few years or even decades.
  4. Storage Gateway — A virtual machine that you install on your on-premises servers. Your on-premises data can be backed up to AWS, providing greater durability.
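S3’s flat, key-based model (as opposed to a real file system) can be sketched in a few lines. The `ObjectStore` class here is a toy stand-in for illustration, not the real S3 API:

```python
# Minimal sketch of S3-style object storage semantics: a flat key space
# where "folders" are just key prefixes, not real directories.
class ObjectStore:
    def __init__(self):
        self._objects = {}                # key -> bytes

    def put_object(self, key, body):
        self._objects[key] = body

    def get_object(self, key):
        return self._objects[key]

    def list_objects(self, prefix=""):
        """Listing by prefix is how S3 simulates folder browsing."""
        return sorted(k for k in self._objects if k.startswith(prefix))

bucket = ObjectStore()
bucket.put_object("photos/2024/cat.jpg", b"\xff\xd8...")
bucket.put_object("photos/2024/dog.jpg", b"\xff\xd8...")
bucket.put_object("docs/report.pdf", b"%PDF...")

# "photos/" is not a directory, just a shared prefix over the flat key space:
photos = bucket.list_objects(prefix="photos/")
```

This flat key space is also why S3 cannot host an operating system: there are no block devices or mountable directories, only whole objects addressed by key.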


Database

  1. RDS (Relational Database Service) — Allows you to run relational databases like MySQL, MariaDB, PostgreSQL, Oracle, or SQL Server. These databases are fully managed by AWS, which handles tasks such as patching and backups.
  2. DynamoDB — It is a highly scalable, high-performance NoSQL database. It provides single-digit millisecond latency at any scale.
  3. ElastiCache — A way of caching data inside the cloud. It can be used to take load off your database by caching the most frequent queries.
  4. Neptune — A fast, reliable, and scalable graph database service.
  5. Redshift — AWS’s data warehousing solution, which can be used to run complex OLAP queries.


Migration

  1. DMS (Database Migration Service) — Can be used to migrate on-premises databases to AWS. It also allows you to migrate from one type of database to another, e.g., from Oracle to MySQL.
  2. SMS (Server Migration Service) — Allows you to migrate on-premises servers to AWS easily and quickly.
  3. Snowball — A briefcase-sized appliance that can be used to move terabytes of data into and out of AWS.

Networking & Content Delivery

  1. VPC (Virtual Private Cloud) — A logically isolated virtual network in the cloud in which you deploy your resources. It allows you to better isolate your resources and secure them.
  2. CloudFront — AWS’s Content Delivery Network (CDN), which consists of edge locations that cache resources.
  3. Route53 — AWS’s highly available DNS (Domain Name System) service. You can register domain names through it.
  4. Direct Connect — Lets you connect your data center to AWS using a high-speed dedicated line.
  5. API Gateway — Allows you to create, publish, and manage APIs at scale.
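Carving a VPC’s address space into subnets is plain CIDR arithmetic, which Python’s stdlib `ipaddress` module can demonstrate. The `10.0.0.0/16` block and the public/private split are illustrative choices, not required values:

```python
# Sketch of splitting a VPC CIDR block into subnets, using only the
# Python standard library; the addresses are illustrative.
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")    # the VPC's address space
subnets = list(vpc.subnets(new_prefix=24))   # all possible /24 subnets

public_subnet = subnets[0]                   # 10.0.0.0/24, e.g. for web servers
private_subnet = subnets[1]                  # 10.0.1.0/24, e.g. for databases

# Every subnet must fall inside the VPC's block:
assert public_subnet.subnet_of(vpc)
```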

Developer Tools

  1. CodeStar — It is a cloud-based service for creating, managing, and working with software development projects on AWS. You can quickly develop, build, and deploy applications on AWS with an AWS CodeStar project.
  2. CodeCommit — It is AWS’s version control service that allows you to store your code and other assets privately in the cloud.
  3. CodeBuild — It automates the process of building (compiling) your code.
  4. CodeDeploy — It is a way of deploying your code in EC2 instances automatically.
  5. CodePipeline — Allows you to keep track of different steps in your deployment like building, testing, authentication, and deployment on development and production environments.
  6. Cloud9 — An IDE (Integrated Development Environment) for writing, running, and debugging code in the cloud.
  7. X-Ray — It makes it easy for developers to analyze the behavior of their distributed applications by providing request tracing, exception collection, and profiling capabilities.

Management Tools

  1. CloudWatch — It can be used to monitor AWS environments like CPU utilization of EC2 and RDS instances and trigger alarms based on different metrics.
  2. CloudFormation — A way of modeling and provisioning your infrastructure as code. You can use templates to provision a whole production environment in minutes.
  3. CloudTrail — A way of auditing AWS resources. It logs all changes and API calls made to AWS.
  4. OpsWorks — It helps in automating Chef deployments on AWS.
  5. Config — Monitors your environment and notifies you when resources drift from certain desired configurations.
  6. Service Catalog — For larger enterprises; helps control which services are authorized for use and which are not.
  7. Trusted Advisor — Gives you recommendations on how to do cost optimizations, and secure your environment.
  8. AWS Auto Scaling — Allows you to automatically scale your resources up and down based on CloudWatch metrics.
  9. Systems Manager — Allows you to group your resources, so you can quickly gain insights, identify issues and act on them.
  10. Managed Services — Provides ongoing management of your AWS infrastructure so you can focus on your applications.
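The alarm behavior CloudWatch provides can be sketched as a simple threshold check over recent metric datapoints. The function, metric values, and threshold below are invented for illustration:

```python
# Minimal sketch of CloudWatch-style alarming: an alarm fires only when
# a metric breaches its threshold for N consecutive evaluation periods.
def alarm_state(datapoints, threshold, evaluation_periods):
    """Return 'ALARM' if the last `evaluation_periods` datapoints all
    exceed `threshold`, else 'OK'."""
    recent = datapoints[-evaluation_periods:]
    if len(recent) == evaluation_periods and all(d > threshold for d in recent):
        return "ALARM"
    return "OK"

cpu_utilization = [35, 40, 82, 91, 88]   # percent, one value per period
state = alarm_state(cpu_utilization, threshold=80, evaluation_periods=3)
```

Requiring several consecutive breaches (rather than one) is what keeps a momentary CPU spike from paging anyone or triggering an unnecessary scale-out.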


Analytics

  1. Athena — Allows you to run SQL queries directly against data stored in your S3 buckets.
  2. EMR (Elastic MapReduce) — Used for big data processing with frameworks such as Hadoop and Apache Spark.
  3. CloudSearch — It can be used to create a fully managed search engine for your website.
  4. Elasticsearch Service — Similar to CloudSearch, but gives you more features, like application monitoring.
  5. Kinesis — A way of streaming and analyzing real-time data at massive scale. It can store TBs of data per hour.
  6. Data Pipeline — Allows you to move data from one place to another, e.g., from S3 to DynamoDB or vice versa.
  7. QuickSight — A business analytics tool that allows you to create visualizations in a rich dashboard for data in AWS, e.g., from S3, DynamoDB, etc.
  8. Glue — It is a fully managed ETL (extract, transform, and load) service that makes it simple and cost-effective to categorize your data, clean it, enrich it, and move it reliably between various data stores.
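The extract-transform-load flow that Glue automates can be illustrated with plain Python over a small CSV snippet. The data, the Fahrenheit-to-Celsius conversion, and the dictionary "warehouse" are all invented for the example:

```python
# Sketch of an extract-transform-load (ETL) job, the kind of work Glue
# automates, using only the Python standard library.
import csv
import io

raw = "id,temp_f\n1,68\n2,86\n3,104\n"               # extract: source data

def fahrenheit_to_celsius(f):
    return round((f - 32) * 5 / 9, 1)

rows = list(csv.DictReader(io.StringIO(raw)))
transformed = [{"id": int(r["id"]),                   # transform: clean types
                "temp_c": fahrenheit_to_celsius(float(r["temp_f"]))}
               for r in rows]

warehouse = {row["id"]: row for row in transformed}   # load: into target store
```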

Security, Identity, and Compliance

  1. IAM (Identity and Access Management) — Allows you to manage users, assign policies, create groups to manage multiple users.
  2. Inspector — An agent that you install on your virtual machines, which then reports any security vulnerabilities.
  3. Certificate Manager — It gives free SSL certificates for your domains that are managed by Route53.
  4. Directory Service — A way of using your company’s directory accounts (such as Microsoft Active Directory) to log in to AWS.
  5. WAF (Web Application Firewall) — Gives you application-level protection and blocks SQL injection and cross-site scripting attacks.
  6. CloudHSM — It helps you meet corporate, contractual, and regulatory compliance requirements for data security by using dedicated Hardware Security Module (HSM) appliances within the AWS Cloud.
  7. Cloud Directory — It enables you to build flexible, cloud-native directories for organizing hierarchies of data along multiple dimensions.
  8. KMS (Key Management Service) — It is a managed service that makes it easy for you to create and control the encryption keys used to encrypt your data.
  9. Organizations — It allows you to create groups of AWS accounts that you can use to more easily manage security and automation settings.
  10. Shield — A managed DDoS (Distributed Denial of Service) protection service that safeguards web applications running on AWS.
  11. Artifact — It is the place where you can get all your compliance certifications.
  12. Macie — A data visibility security service that helps classify and protect your sensitive and business-critical content.
  13. GuardDuty — Provides intelligent threat detection to protect your AWS accounts and workloads.
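One core rule of IAM’s evaluation logic, that an explicit deny always overrides an allow, can be sketched as follows. The policy document, the wildcard matching via `fnmatch`, and the helper functions are simplified illustrations, not the real IAM engine:

```python
# Sketch of IAM-style policy evaluation: an action on a resource is
# allowed only if some statement allows it AND no statement denies it.
import fnmatch

policy = {
    "Statement": [
        {"Effect": "Allow", "Action": "s3:GetObject", "Resource": "my-bucket/*"},
        {"Effect": "Deny",  "Action": "s3:GetObject", "Resource": "my-bucket/secret/*"},
    ]
}

def matches(pattern, value):
    return fnmatch.fnmatch(value, pattern)     # '*' wildcard matching

def is_allowed(policy, action, resource):
    allowed = False
    for stmt in policy["Statement"]:
        if matches(stmt["Action"], action) and matches(stmt["Resource"], resource):
            if stmt["Effect"] == "Deny":
                return False                   # explicit deny always wins
            allowed = True
    return allowed                             # default is implicit deny
```

Note the default: with no matching statement at all, the answer is "deny", which is why new IAM users can do nothing until a policy grants them something.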


Application Services

  1. Step Functions — Lets you coordinate the components of your application and its microservices as a series of steps in a visual workflow.
  2. SWF (Simple Workflow Service) — A way of coordinating both automated tasks and human-led tasks.
  3. SNS (Simple Notification Service) — Can be used to send you notifications in the form of email and SMS regarding your AWS services. It is a push-based service.
  4. SQS (Simple Queue Service) — The first service offered by AWS. It can be used to decouple your applications. It is a pull-based service.
  5. Elastic Transcoder — Changes a video’s format and resolution to support different devices like tablets, smartphones, and laptops of different resolutions.
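SQS-style decoupling, where consumers pull messages at their own pace, can be sketched with Python’s stdlib queue; the order IDs and the producer/consumer roles are invented for the example:

```python
# Sketch of SQS-style decoupling: producers enqueue messages and move on,
# while consumers pull them at their own pace (a pull-based service).
from queue import Queue

order_queue = Queue()                    # stand-in for an SQS queue

# Producer (e.g. a web frontend) enqueues work without waiting:
for order_id in (101, 102, 103):
    order_queue.put({"order_id": order_id})

# Consumer (e.g. a fulfilment worker) asks for work when it is ready:
processed = []
while not order_queue.empty():
    message = order_queue.get()          # pull-based: consumer polls
    processed.append(message["order_id"])
```

Because the queue sits between them, either side can be slow, restarted, or scaled out without the other noticing, which is exactly the decoupling described above.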

Mobile Services

  1. Mobile Hub — Allows you to add, configure and design features for mobile apps. It is a console for mobile app development.
  2. Cognito — Adds user sign-up and sign-in to your apps, including through social identity providers.
  3. Device Farm — Enables you to improve the quality of your apps by quickly testing them on hundreds of real mobile devices.
  4. AWS AppSync — An enterprise-level, fully managed GraphQL service with real-time data synchronization and offline programming features.
  5. Mobile Analytics — Allows you to simply and cost-effectively analyze mobile app data.

Business Productivity

  1. Alexa for Business — It lets you empower your organization with voice, using Alexa. Allows you to build custom voice skills for your organization.
  2. Chime — Can be used for online meetings and video conferencing.
  3. WorkDocs — Helps you store and share documents in the cloud.
  4. WorkMail — Allows you to send and receive business emails.

Desktop & App Streaming

  1. WorkSpaces — A VDI (Virtual Desktop Infrastructure) service that allows you to use remote desktops in the cloud.
  2. AppStream 2.0 — A way of streaming desktop applications to your users in the web browser, e.g., using MS Word in Google Chrome.

Artificial Intelligence

  1. Lex — Allows you to quickly build chatbots.
  2. Polly — AWS’s text-to-speech service. You can create audio versions of your notes using it.
  3. Machine Learning — You just have to give it your dataset and target variable, and AWS will take care of training your model.
  4. Rekognition — AWS’s face recognition service. Allows you to recognize faces and objects in images and videos.
  5. SageMaker — Helps you to build, train and deploy machine learning models at any scale.
  6. Comprehend — It is a Natural Language Processing (NLP) service that uses machine learning to find insights and relationships in text. It can be used for sentiment analysis.
  7. Transcribe — The opposite of Polly: AWS’s speech-to-text service, which provides high-quality and affordable transcriptions.
  8. Translate — It is like Google Translate and allows you to translate text in one language to another.

AR & VR (Augmented Reality & Virtual Reality)

  1. Sumerian — A set of tools for creating high-quality virtual reality (VR) experiences on the web. You can quickly create interactive 3D scenes and publish them as a website for users to access.

Customer Engagement

  1. Amazon Connect — Allows you to create a customer care center in the cloud.
  2. Pinpoint — Like Google Analytics for mobile applications. It helps you understand your users and engage with them.
  3. SES (Simple Email Service) — Allows you to send bulk emails to your customers at an extremely low price.

Game Development

  1. GameLift — A service managed by AWS that can be used to host dedicated game servers. It seamlessly scales without taking your game offline.

Internet of Things

  1. IoT Core — A managed cloud platform that lets connected devices — cars, light bulbs, sensor grids, and more — easily and securely interact with cloud applications and other devices.
  2. IoT Device Management — Allows you to manage your IoT devices at any scale.
  3. IoT Analytics — Can be used to perform analysis on data collected by your IoT devices.
  4. Greengrass — Lets your IoT devices process locally generated data while still taking advantage of AWS services.
  5. Amazon FreeRTOS — It is a real-time operating system for microcontrollers that makes it easy to securely connect IoT devices locally or to the cloud.





The term “SAN” (Storage Area Network) is commonly used with two different meanings:

A complete storage infrastructure, including all the hardware and software involved in providing shared access to central storage devices from multiple servers. This usage, although not strictly correct, is commonly accepted and is what most people refer to when talking about a “SAN”.

A single storage array (see below), as in, “we have a Brand X SAN with 20 TB of storage”. This usage is fundamentally incorrect, because it doesn’t take into account the real meaning of “SAN” and just assumes it is some form of storage device.

A SAN can be composed of very different hardware, but can usually be broken down into various components:

Storage Arrays: this is where data is actually stored (and what is erroneously called a “SAN” quite often). They are composed of:

Physical Disks: they, of course, archive the data. Enterprise-level disks are used, which means they usually have lower per-disk capacity, but much higher performance and reliability; also, they are a lot more expensive than consumer-class disks. The disks can use a wide range of connections and protocols (SATA, SAS, FC, etc.) and different storage media (Solid-State Disks are becoming increasingly common), depending on the specific SAN implementation.

Disk Enclosures: this is where the disks are placed. They provide electricity and data connections to them.
Storage Controllers/Processors: these manage disk I/O, RAID, and caching (the term “controller” or “processor” varies between SAN vendors). Again, enterprise-level controllers are used, so they have much better performance and reliability than consumer-class hardware. They can be, and usually are, configured in pairs for redundancy.
Storage Pools: a storage pool is a block of storage space, comprising some (often many) disks in a RAID configuration. It is called a “pool” because sections of it can be allocated, resized, and de-allocated on demand, creating LUNs.

Logical Unit Numbers (LUNs): a LUN is a chunk of space drawn from a storage pool, which is then made available (“presented”) to one or more servers. It is seen by the servers as a storage volume, and can be formatted by them using any file system they prefer.
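The pool/LUN bookkeeping described above can be sketched as follows. The class is a toy model of on-demand allocation, not a real array’s firmware, and the names and sizes are invented:

```python
# Sketch of pool/LUN bookkeeping: LUNs are carved out of a pool's free
# space on demand and release their space back when deleted.
class StoragePool:
    def __init__(self, capacity_gb):
        self.capacity_gb = capacity_gb
        self.luns = {}                       # LUN name -> size in GB

    @property
    def free_gb(self):
        return self.capacity_gb - sum(self.luns.values())

    def create_lun(self, name, size_gb):
        if size_gb > self.free_gb:
            raise ValueError("not enough free space in pool")
        self.luns[name] = size_gb            # "present" a new volume

    def delete_lun(self, name):
        del self.luns[name]                  # space returns to the pool

pool = StoragePool(capacity_gb=1000)
pool.create_lun("db-server-data", 400)
pool.create_lun("mail-server-data", 250)
```

Resizing or reassigning a LUN is just an update to this bookkeeping, which is why allocating storage from a SAN takes minutes rather than a hardware purchase.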

Tape Libraries: they can be connected to a SAN and use the same communications technology both for connecting to servers and for direct storage-to-tape backups.

Communications Network (the “SAN” proper): this is what allows the storage users (servers) to access the storage devices (storage array(s), tape libraries, etc.); it is, strictly speaking, the real meaning of the term “Storage Area Network”, and the only part of a storage infrastructure that should be defined as such. There really are lots of solutions to connect servers to shared storage devices, but the most common ones are:

Fibre Channel: a technology which uses fiber-optics for high-speed connections to shared storage. It includes host bus adapters, fiber-optic cables and FC switches, and can achieve transfer speeds ranging from 1 Gbit to 20 Gbit. Also, multipath I/O can be used to group several physical links together, allowing for higher bandwidth and fault tolerance.

iSCSI: an implementation of the SCSI protocol over IP transport. It runs over standard Ethernet hardware, which means it can achieve transfer speeds from 100 Mbit (generally not used for SANs) to 100 Gbit. Multipath I/O can also be used (although the underlying networking layer introduces some additional complexities).
Fibre Channel over Ethernet (FCoE): a technology in-between full FC and iSCSI, which uses Ethernet as the physical layer but FC as the transport protocol, thus avoiding the need for an IP layer in the middle.

InfiniBand: a very high-performance connectivity technology, less used and quite expensive, but which can achieve some impressive bandwidth.

Host Bus Adapters (HBAs): the adapter cards used by the servers to access the connectivity layer; they can be dedicated adapters (as in FC SANs) or standard Ethernet cards. There are also iSCSI HBAs, which have a standard Ethernet connection, but can handle the iSCSI protocol in hardware, thus relieving the server of some additional load.

A SAN provides many additional capabilities over direct-attached (or physically shared) storage:

Fault tolerance: high availability is built-in in any enterprise-level SAN, and is handled at all levels, from power supplies in storage arrays to server connections. Disks are more reliable, RAID is used to withstand single-disk (or multiple-disk) failures, redundant controllers are employed, and multipath I/O allows for uninterrupted storage access even in the case of a link failure.
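One building block of that fault tolerance, parity-based RAID, can be demonstrated directly: with XOR parity, any single lost block can be rebuilt from the surviving blocks. The disk contents below are arbitrary example bytes:

```python
# Sketch of how XOR parity lets a RAID array survive a single-disk
# failure: the parity block is the XOR of all data blocks, so any one
# lost block equals the XOR of everything that survives.
from functools import reduce

def xor_blocks(blocks):
    """XOR corresponding bytes across a list of equal-length blocks."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

disk1 = b"\x01\x02\x03\x04"
disk2 = b"\x10\x20\x30\x40"
disk3 = b"\xaa\xbb\xcc\xdd"
parity = xor_blocks([disk1, disk2, disk3])    # stored on the parity disk

# disk2 fails; rebuild its contents from the surviving disks + parity:
rebuilt = xor_blocks([disk1, disk3, parity])
```

This works because XOR is its own inverse: `d1 ^ d2 ^ d3 = p` implies `d2 = d1 ^ d3 ^ p`. Real arrays add controller redundancy and multipath I/O on top of this per-stripe math.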

Greater storage capacity: SANs can contain many large storage devices, allowing for much greater storage spaces than what a single server could achieve.

Dynamic storage management: storage volumes (LUNs) can be created, resized, and destroyed on demand; they can be moved from one server to another; allocating additional storage to a server requires only some configuration, as opposed to buying disks and installing them.

Performance: a properly-configured SAN, using recent (although expensive) technologies, can achieve really impressive performance, and is designed from the ground up to handle heavy concurrent load from multiple servers.
Storage-level replication: two (or more) storage arrays can be configured for synchronous replication, allowing for the complete redirection of server I/O from one to another in fault or disaster scenarios.

Storage-level snapshots: most storage arrays allow for taking snapshots of single volumes and/or whole storage pools. Those snapshots can then be restored if needed.

Storage-level backups: most SANs also allow for performing backups directly from storage arrays to SAN-connected tape libraries, completely bypassing the servers which actually use the data; various techniques are employed to ensure data integrity and consistency.

Based on everything above, the benefits of using SANs are obvious; but what about the costs of buying one, and the complexity of managing one?

SANs are enterprise-grade hardware (although there can be a business case for small SANs even in small/medium companies); they are of course highly customizable, so they can range from “a couple of TBs with 1 Gbit iSCSI and somewhat high reliability” to “several hundred TBs with amazing speed, performance and reliability and full synchronous replication to a DR data center”; costs vary accordingly, but are generally higher (in “total cost” as well as in “cost per gigabyte of space”) than other solutions. There is no pricing standard, but it’s not uncommon for even small SANs to have price tags in the tens of thousands (and even hundreds of thousands) of dollars.

Designing and implementing a SAN (even more so for a high-end one) requires specific skills, and this kind of job is usually done by highly-specialized people. Day-to-day operations, such as managing LUNs, are considerably easier, but in many companies storage management is anyway handled by a dedicated person or team.


A short note on Hybrid Data Center

To address growing data center demands and provide the added benefits of agility, scalability and global reach, the traditional data center is transforming into what is commonly referred to as a hybrid data center.

A hybrid cloud combines your existing data center (private cloud) resources, over which you have complete control, with ready-made IT infrastructure resources (e.g., compute, networking, storage, applications and services) that provide bursting and scaling capabilities found in IaaS (infrastructure as a service) or public cloud offerings, such as Amazon Web Services (AWS).

Here are three key benefits of using a hybrid cloud approach:

Benefit 1: Start Small and Expand as Needed

A hybrid cloud approach enables you to license IT infrastructure resources on a project-by-project basis with the ability to add more as needed. Without the public cloud, you would potentially invest in hardware that would sit idly during off-peak times and only be used for short-term projects.

A hybrid cloud also lets you take advantage of component-based development methodologies. If you use AWS for building new applications, architects and coders can leverage development techniques that are more component-based than previously used techniques. You can easily separate development, testing and production environments for new applications. Environments can be cloned or replicated, spooled up, and used as needed with seamless traffic flow and strong security policy enforcement.

Benefit 2: Expand Your Data Center Seamlessly and Transparently

With a hybrid strategy, your public cloud essentially functions as an extension of your data center via an IPsec VPN connection, allowing you to safely and securely deploy workloads in either location. The IPsec VPN connection acts as an overlay network, bringing added benefits of privacy and simplicity from the reduction in the number of Layer 3 hops across the end-to-end network.

This allows you to transparently expand your internal IP address space into the public cloud using widely supported routing protocols. With an overlay network, there’s nothing new or challenging to your network operations team or security specialists, and security policies can be easily extended to cover the routes.

Benefit 3: Security Policy Consistency – From the Network to the Cloud

Your business relies on the consistent, reliable operation of applications and data, whether on-premises or in the cloud. To ensure your applications and data are protected from cyber adversaries, best practices dictate that your policies be consistent and, ideally, managed centrally.

By centrally managing your on-premises and public cloud security policies, you are able to perform logical groupings of like rules, security objects and so on. This creates many opportunities for improved efficiency using a single pane of glass for all your firewalls, public and private.

For example, many configuration elements universal to all firewalls in your organization can be configured once and shared with all firewalls, including such elements as DNS servers, NTP servers, local admin accounts and syslog servers.


The Internet of Things (IoT), also sometimes referred to as the Internet of Everything (IoE), consists of all the web-enabled devices that collect, send and act on data they acquire from their surrounding environments using embedded sensors, processors and communication hardware.

An IoT ecosystem consists of web-enabled smart devices that use embedded processors, sensors and communication hardware to collect, send and act on data they acquire from their environments. IoT devices share the sensor data they collect by connecting to an IoT gateway or other edge device where data is either sent to the cloud to be analyzed or analyzed locally. Sometimes, these devices communicate with other related devices and act on the information they get from one another. The devices do most of the work without human intervention, although people can interact with the devices — for instance, to set them up, give them instructions or access the data.
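The device-to-gateway flow described above can be sketched in a few lines; the device names, readings, message format, and temperature threshold are all invented for the example:

```python
# Sketch of the sensor -> gateway flow: devices emit JSON readings, a
# gateway aggregates them, and simple edge logic acts locally without
# waiting for the cloud.
import json

readings = [                                  # messages from two devices
    json.dumps({"device": "thermostat-1", "temp_c": 21.5}),
    json.dumps({"device": "thermostat-2", "temp_c": 30.2}),
]

def gateway_process(messages, max_temp_c=28.0):
    """Analyze at the edge: flag any device over the threshold."""
    alerts = []
    for msg in messages:
        data = json.loads(msg)
        if data["temp_c"] > max_temp_c:
            alerts.append(data["device"])
    return alerts

alerts = gateway_process(readings)
```

In a real deployment the same decision could instead be deferred to the cloud for heavier analytics; the gateway’s job is choosing which data is acted on locally and which is forwarded.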

The connectivity, networking and communication protocols used with these web-enabled devices largely depend on the specific IoT applications deployed.

Devices and objects with built-in sensors are connected to an Internet of Things platform, which integrates data from the different devices and applies analytics to share the most valuable information with applications built to address specific needs.

These powerful IoT platforms can pinpoint exactly what information is useful and what can safely be ignored. This information can be used to detect patterns, make recommendations, and detect possible problems before they occur.


Internet of Things can connect devices embedded in various systems to the internet. When devices/objects can represent themselves digitally, they can be controlled from anywhere. The connectivity then helps us capture more data from more places, ensuring more ways of increasing efficiency and improving safety and IoT security.

IoT is a transformational force that can help companies improve performance through IoT analytics and IoT Security to deliver better results. Businesses in the utilities, oil & gas, insurance, manufacturing, transportation, infrastructure and retail sectors can reap the benefits of IoT by making more informed decisions, aided by the torrent of interactional and transactional data at their disposal.

Now the question that comes to everyone’s mind is: why is IoT required, and how does it help organizations?

An article by Ashton published in the RFID Journal in 1999 said, “If we had computers that knew everything there was to know about things – using data they gathered without any help from us – we would be able to track and count everything, and greatly reduce waste, loss and cost. We would know when things needed replacing, repairing or recalling, and whether they were fresh or past their best. We need to empower computers with their own means of gathering information, so they can see, hear and smell the world for themselves, in all its random glory.” This is precisely what IoT platforms do for us. They enable devices/objects to observe, identify, and understand a situation or their surroundings without depending on human help.

IoT platforms can help organizations reduce cost through improved process efficiency, asset utilization and productivity. With improved tracking of devices/objects using sensors and connectivity, they can benefit from real-time insights and analytics, which would help them make smarter decisions. The growth and convergence of data, processes and things on the internet would make such connections more relevant and important, creating more opportunities for people, businesses and industries.


Alongside the SAN (Storage Area Network), data can be stored using two other basic models: DAS (Direct Attached Storage) and NAS (Network Attached Storage).

DAS is so named because it is directly attached to a server, without any intermediary network involved. The main distinguishing characteristic of DAS (Direct Attached Storage) is its direct connectivity through a host bus adapter, without the use of networking devices such as hubs, bridges, and switches.

In a network attached storage system, many independent clients can access the storage. The aim of NAS (Network Attached Storage) is to provide file-based storage. DAS (Direct Attached Storage) can also provide multiple, parallel access if we equip it with multiple ports.

In the same way, we can convert a NAS (Network Attached Storage) to a DAS (Direct Attached Storage) by disconnecting the network and attaching the storage directly to a single computer. DAS (Direct Attached Storage) can be termed an inefficient arrangement because it cannot share its idle resources with other units on the network.

NAS (Network Attached Storage) and SAN (Storage Area Network) can overcome this deficiency, but both are costly and difficult to handle. The main DAS (Direct Attached Storage) protocols are SATA, SAS, and Fibre Channel.


The practical benefit of having a SAN is that it allows the sharing of storage devices and frees the user from the hassle of physical cabling to individual storage devices. Servers can boot from the SAN (Storage Area Network) on their own; this allows a replacement server to make use of the logical unit number (LUN) of the defective server.

SAN (Storage Area Network) was initially so expensive that even major multinationals were wary of employing it. Nowadays the cost of a SAN (Storage Area Network) has dropped to a level where even some small organizations are using one. It has become easier to establish a SAN (Storage Area Network) than to buy servers with plenty of disk space. A SAN (Storage Area Network) also increases disk utilization.

When you develop a single storage network, you get the ease of handling files as a single unit. This lets central storage pools reside at the network level, and enables you to allocate storage more conveniently and intelligently to the servers that need it.

The initial cost of implementing any SAN (Storage Area Network) disaster recovery system can be huge, but its higher efficiency and quick results will cover the cost in no time. A SAN uses very protective disk mechanisms to make sure that the data does not become corrupted. The best part of employing a SAN (Storage Area Network) is that you do not need to worry about upgrading it at regular intervals: one just has to deploy the SAN (Storage Area Network) system, and it will do wonders by itself. A SAN offers a centralized storage network, and its speed and efficiency are much greater than a bundle of individual servers put together to make a storage network.


Storage Area Networks (SANs) are frequently deployed in support of business-critical, performance-sensitive applications such as:

Oracle databases. These are frequently business-critical and require the highest performance and availability.
Microsoft SQL Server databases. Like Oracle databases, MS SQL Server databases commonly store an enterprise’s most valuable data, so they require the highest performance and availability.

Large virtualization deployments using VMware, KVM, or Microsoft Hyper-V. These environments often extend to thousands of virtual machines running a broad range of operating systems and applications, with different performance requirements. Virtualized environments concentrate many applications, so infrastructure reliability becomes even more important because a failure can cause multiple application outages.

Large virtual desktop infrastructures (VDIs). These environments serve virtual desktops to large numbers of an organization’s users. Some VDI environments can easily number in the tens of thousands of virtual desktops. By centralizing the virtual desktops, organizations can more easily manage data protection and data security.
SAP or other large ERP or CRM environments. SAN architectures are ideal for enterprise resource planning and customer resource management workloads.

Fibre Channel (FC) SANs have the reputation of being expensive, complex and difficult to manage. Ethernet-based iSCSI has reduced these challenges by encapsulating SCSI commands into IP packets that don’t require an FC connection.

The emergence of iSCSI means that instead of learning, building and managing two networks — an Ethernet local area network (LAN) for user communication and an FC SAN for storage — an organization can use its existing knowledge and infrastructure for both LANs and SANs. This is an especially useful approach in small and midsize businesses that may not have the funds or expertise to support a Fibre Channel SAN.

Organizations use SANs for distributed applications that need fast local network performance. SANs improve the availability of applications through multiple data paths. They can also improve application performance because they enable IT administrators to offload storage functions and segregate networks.

Additionally, SANs help increase the effectiveness and use of storage because they enable administrators to consolidate resources and deliver tiered storage. SANs also improve data protection and security. Finally, SANs can span multiple sites, which helps companies with their business continuity strategies.



The internet of things offers a number of benefits to organizations, enabling them to:

  • Monitor their overall business processes;
  • Improve the customer experience;
  • Save time and money;
  • Enhance employee productivity;
  • Integrate and adapt business models;
  • Make better business decisions; and
  • Generate more revenue.

IoT encourages companies to rethink the ways they approach their businesses, industries and markets and gives them the tools to improve their business strategies.



The Internet of Things is a term first coined by British entrepreneur Kevin Ashton in 1999. It covers everything that is technologically smart and able to communicate with other devices, networks, systems, and things. Consumer IoT incorporates the above and is essentially the IoT product aimed at the consumer space.

It is “a system of interrelated computing devices, mechanical and digital machines, objects, animals or people that are provided with unique identifiers and the ability to transfer data over a network without requiring human-to-human or human-to-computer interaction.” The Internet of Things (IoT) refers to the use of intelligently connected devices and systems to leverage data gathered by embedded sensors and actuators in machines and other physical objects. In other words, IoT (Internet of Things) can refer to any physical object connected to a network.