A complete storage infrastructure, including all the hardware and software involved in providing shared access to central storage devices from multiple servers. This usage, although not strictly correct, is commonly accepted and is what most people refer to when talking about a “SAN”.

A single storage array (see later); as in, “we have a Brand X SAN with 20 TB of storage”. This usage is fundamentally incorrect, because it ignores the real meaning of “SAN” and simply assumes it is some form of storage device.

A SAN can be composed of very different hardware, but can usually be broken down into various components:

Storage Arrays: this is where data is actually stored (and what is erroneously called a “SAN” quite often). They are composed of:

Physical Disks: these, of course, store the data. Enterprise-level disks are used, which usually means lower per-disk capacity but much higher performance and reliability; they are also a lot more expensive than consumer-class disks. The disks can use a wide range of connections and protocols (SATA, SAS, FC, etc.) and different storage media (solid-state drives are becoming increasingly common), depending on the specific SAN implementation.

Disk Enclosures: this is where the disks are placed. They provide electricity and data connections to them.
Storage Controllers/Processors: these manage disk I/O, RAID and caching (whether the term “controller” or “processor” is used varies between SAN vendors). Again, enterprise-level controllers are used, so they have much better performance and reliability than consumer-class hardware. They can be, and usually are, configured in pairs for redundancy.
Storage Pools: a storage pool is a quantity of storage space comprising some (often many) disks in a RAID configuration. It is called a “pool” because sections of it can be allocated, resized and de-allocated on demand, creating LUNs.

Logical Unit Numbers (LUNs): a LUN is a chunk of space drawn from a storage pool, which is then made available (“presented”) to one or more servers. Each server sees it as a storage volume and can format it with whatever file system it prefers.

Tape Libraries: they can be connected to a SAN and use the same communications technology both for connecting to servers and for direct storage-to-tape backups.

Communications Network (the “SAN” proper): this is what allows the storage users (servers) to access the storage devices (storage array(s), tape libraries, etc.); it is, strictly speaking, the real meaning of the term “Storage Area Network”, and the only part of a storage infrastructure that should be defined as such. There are many solutions for connecting servers to shared storage devices, but the most common ones are:

Fibre Channel: a technology which uses fiber optics for high-speed connections to shared storage. It comprises host bus adapters, fiber-optic cables and FC switches, with link speeds ranging from 1 Gbit/s to 32 Gbit/s (and faster standards emerging). Also, multipath I/O can be used to group several physical links together, allowing for higher bandwidth and fault tolerance.

iSCSI: an implementation of the SCSI protocol over IP transport. It runs over standard Ethernet hardware, which means it can achieve transfer speeds from 100 Mbit/s (generally not used for SANs) to 100 Gbit/s. Multipath I/O can also be used (although the underlying networking layer introduces some additional complexity).
Fibre Channel over Ethernet (FCoE): a technology in-between full FC and iSCSI, which uses Ethernet as the physical layer but FC as the transport protocol, thus avoiding the need for an IP layer in the middle.

InfiniBand: a very high-performance connectivity technology, less used and quite expensive, but which can achieve some impressive bandwidth.

Host Bus Adapters (HBAs): the adapter cards used by the servers to access the connectivity layer; they can be dedicated adapters (as in FC SANs) or standard Ethernet cards. There are also iSCSI HBAs, which have a standard Ethernet connection, but can handle the iSCSI protocol in hardware, thus relieving the server of some additional load.
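
The pool/LUN relationship described above can be sketched in a few lines. This is a minimal, hypothetical model, not any vendor's actual API: a pool aggregates raw capacity, and LUNs are carved from it on demand.

```python
# Illustrative sketch of a storage pool and its LUNs; names are invented.

class StoragePool:
    def __init__(self, capacity_gb):
        self.capacity_gb = capacity_gb
        self.luns = {}            # LUN id -> size in GB

    @property
    def free_gb(self):
        return self.capacity_gb - sum(self.luns.values())

    def create_lun(self, lun_id, size_gb):
        """Carve a LUN out of the pool's free space."""
        if size_gb > self.free_gb:
            raise ValueError("not enough free space in pool")
        self.luns[lun_id] = size_gb

    def resize_lun(self, lun_id, new_size_gb):
        """Grow or shrink a LUN, respecting the pool's free space."""
        delta = new_size_gb - self.luns[lun_id]
        if delta > self.free_gb:
            raise ValueError("not enough free space in pool")
        self.luns[lun_id] = new_size_gb

    def destroy_lun(self, lun_id):
        """Return the LUN's space to the pool."""
        del self.luns[lun_id]

pool = StoragePool(capacity_gb=10_000)   # e.g. a RAID group of many disks
pool.create_lun("LUN0", 2_000)           # presented to server A
pool.create_lun("LUN1", 500)             # presented to server B
print(pool.free_gb)                      # 7500
```

The point of the sketch is the on-demand nature of the allocation: LUNs come and go without touching the physical disks underneath.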

A SAN provides many additional capabilities over direct-attached (or physically shared) storage:

Fault tolerance: high availability is built-in in any enterprise-level SAN, and is handled at all levels, from power supplies in storage arrays to server connections. Disks are more reliable, RAID is used to withstand single-disk (or multiple-disk) failures, redundant controllers are employed, and multipath I/O allows for uninterrupted storage access even in the case of a link failure.

Greater storage capacity: SANs can contain many large storage devices, allowing for much greater storage spaces than what a single server could achieve.

Dynamic storage management: storage volumes (LUNs) can be created, resized and destroyed on demand; they can be moved from one server to another; allocating additional storage to a server requires only some configuration, as opposed to buying disks and installing them.

Performance: a properly-configured SAN, using recent (although expensive) technologies, can achieve really impressive performance, and is designed from the ground up to handle heavy concurrent load from multiple servers.
Storage-level replication: two (or more) storage arrays can be configured for synchronous replication, allowing for the complete redirection of server I/O from one to another in fault or disaster scenarios.

Storage-level snapshots: most storage arrays allow for taking snapshots of single volumes and/or whole storage pools. Those snapshots can then be restored if needed.

Storage-level backups: most SANs also allow for performing backups directly from storage arrays to SAN-connected tape libraries, completely bypassing the servers which actually use the data; various techniques are employed to ensure data integrity and consistency.
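
The multipath I/O behavior mentioned under fault tolerance can be illustrated with a toy sketch: several physical paths lead to the same LUN, and I/O transparently fails over to a healthy path when one link goes down. The names and the simple "first healthy path" policy are illustrative only; real multipath drivers offer several policies (round-robin, least-queue-depth, etc.).

```python
# Toy failover sketch; path names and policy are invented for illustration.

class Path:
    def __init__(self, name):
        self.name = name
        self.healthy = True

def issue_io(paths, request):
    """Send a request down the first healthy path (simple failover policy)."""
    for path in paths:
        if path.healthy:
            return f"{request} via {path.name}"
    raise IOError("all paths to the storage array are down")

paths = [Path("hba0->switchA"), Path("hba1->switchB")]
print(issue_io(paths, "READ block 42"))   # uses hba0->switchA

paths[0].healthy = False                  # simulate a link failure
print(issue_io(paths, "READ block 42"))   # fails over to hba1->switchB
```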

Based on everything above, the benefits of using SANs are obvious; but what about the costs of buying one, and the complexity of managing one?

SANs are enterprise-grade hardware (although there can be a business case for small SANs even in small/medium companies); they are of course highly customizable, so can range from “a couple TBs with 1 Gbit iSCSI and somewhat high reliability” to “several hundred TBs with amazing speed, performance and reliability and full synchronous replication to a DR data center”; costs vary accordingly, but are generally higher (in “total cost” as well as in “cost per gigabyte of space”) than other solutions. There is no pricing standard, but it’s not uncommon for even small SANs to have price tags in the tens-of-thousands (and even hundreds-of-thousands) of dollars range.

Designing and implementing a SAN (even more so a high-end one) requires specific skills, and this kind of job is usually done by highly specialized people. Day-to-day operations, such as managing LUNs, are considerably easier, but in many companies storage management is handled by a dedicated person or team anyway.

A short note on the hybrid data center

To address growing data center demands and provide the added benefits of agility, scalability and global reach, the traditional data center is transforming into what is commonly referred to as a hybrid data center.

A hybrid cloud combines your existing data center (private cloud) resources, over which you have complete control, with ready-made IT infrastructure resources (e.g., compute, networking, storage, applications and services) that provide bursting and scaling capabilities found in IaaS (infrastructure as a service) or public cloud offerings, such as Amazon Web Services (AWS).

Here are three key benefits of using a hybrid cloud approach:

Benefit 1: Start Small and Expand as Needed

A hybrid cloud approach enables you to license IT infrastructure resources on a project-by-project basis with the ability to add more as needed. Without the public cloud, you would potentially invest in hardware that would sit idly during off-peak times and only be used for short-term projects.

A hybrid cloud also lets you take advantage of component-based development methodologies. If you use AWS for building new applications, architects and coders can leverage development techniques that are more component-based than previously used techniques. You can easily separate development, testing and production environments for new applications. Environments can be cloned or replicated, spooled up, and used as needed with seamless traffic flow and strong security policy enforcement.

Benefit 2: Expand Your Data Center Seamlessly and Transparently

With a hybrid strategy, your public cloud essentially functions as an extension of your data center via an IPsec VPN connection, allowing you to safely and securely deploy workloads in either location. The IPsec VPN connection acts as an overlay network, bringing added benefits of privacy and simplicity from the reduction in the number of Layer 3 hops across the end-to-end network.

This allows you to transparently expand your internal IP address space into the public cloud using widely supported routing protocols. With an overlay network, there’s nothing new or challenging to your network operations team or security specialists, and security policies can be easily extended to cover the routes.

Benefit 3: Security Policy Consistency – From the Network to the Cloud

Your business relies on the consistent, reliable operation of applications and data, whether on-premises or in the cloud. To ensure your applications and data are protected from cyber adversaries, best practices dictate that your policies be consistent and, ideally, managed centrally.

By centrally managing your on-premises and public cloud security policies, you are able to logically group similar rules, security objects and so on. This creates many opportunities for improved efficiency, using a single pane of glass for all your firewalls, public and private.

For example, many configuration elements universal to all firewalls in your organization can be configured once and shared with all firewalls, including such elements as DNS servers, NTP servers, local admin accounts and syslog servers.
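
The "configure once, share with all firewalls" idea above can be sketched as a shared template that each firewall's own configuration inherits. This is a hypothetical illustration; the element names and addresses are invented and do not correspond to any particular firewall vendor's format.

```python
# Hypothetical central template shared by all firewalls; values are invented.

SHARED_TEMPLATE = {
    "dns_servers": ["10.0.0.53"],
    "ntp_servers": ["10.0.0.123"],
    "syslog_servers": ["10.0.0.5"],
    "local_admins": ["fwadmin"],
}

def build_config(firewall_specific):
    """Merge the shared template with one firewall's own settings."""
    config = dict(SHARED_TEMPLATE)      # universal elements, defined once
    config.update(firewall_specific)    # per-device settings and overrides
    return config

onprem = build_config({"name": "dc-edge-fw", "zone": "on-premises"})
cloud = build_config({"name": "aws-vpc-fw", "zone": "public-cloud"})
print(onprem["ntp_servers"] == cloud["ntp_servers"])   # True
```

A change to the template (say, a new syslog server) then propagates to every firewall the next time configs are rebuilt, which is the efficiency gain the text describes.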


The Internet of Things (IoT), also sometimes referred to as the Internet of Everything (IoE), consists of all the web-enabled devices that collect, send and act on data they acquire from their surrounding environments using embedded sensors, processors and communication hardware.

IoT devices share the sensor data they collect by connecting to an IoT gateway or another edge device, where the data is either sent to the cloud for analysis or analyzed locally. Sometimes these devices communicate with other related devices and act on the information they receive from one another. The devices do most of the work without human intervention, although people can interact with them, for instance to set them up, give them instructions or access the data.
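
The sensor-to-gateway flow described above can be sketched as a routing decision at the edge: some readings are acted on locally, the rest are forwarded to the cloud for analysis. The threshold rule, field names and sensor IDs below are invented for illustration.

```python
# Hypothetical edge-gateway routing rule; names and threshold are invented.

def gateway_route(reading, local_threshold=30.0):
    """Decide where a sensor reading should be processed."""
    if reading["value"] > local_threshold:
        # Urgent condition: act locally without waiting for the cloud.
        return {"action": "local_alert", "sensor": reading["sensor"]}
    # Routine data: forward to the cloud for analytics.
    return {"action": "send_to_cloud", "sensor": reading["sensor"]}

print(gateway_route({"sensor": "temp-01", "value": 42.5}))  # local alert
print(gateway_route({"sensor": "temp-02", "value": 21.0}))  # to the cloud
```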

The connectivity, networking and communication protocols used with these web-enabled devices largely depend on the specific IoT applications deployed.

Devices and objects with built-in sensors are connected to an Internet of Things platform, which integrates data from the different devices and applies analytics to share the most valuable information with applications built to address specific needs.

These powerful IoT platforms can pinpoint exactly what information is useful and what can safely be ignored. This information can be used to detect patterns, make recommendations, and detect possible problems before they occur.


The Internet of Things can connect devices embedded in various systems to the internet. When devices/objects can represent themselves digitally, they can be controlled from anywhere. This connectivity helps us capture more data from more places, opening up more ways to increase efficiency and improve safety and security.

IoT is a transformational force that can help companies improve performance through IoT analytics and IoT security, delivering better results. Businesses in the utilities, oil & gas, insurance, manufacturing, transportation, infrastructure and retail sectors can reap the benefits of IoT by making more informed decisions, aided by the torrent of interactional and transactional data at their disposal.

The question that naturally arises is: why is IoT required, and how does it help organizations?

An article by Ashton published in the RFID Journal in 1999 said, “If we had computers that knew everything there was to know about things – using data they gathered without any help from us – we would be able to track and count everything, and greatly reduce waste, loss and cost. We would know when things needed replacing, repairing or recalling, and whether they were fresh or past their best. We need to empower computers with their own means of gathering information, so they can see, hear and smell the world for themselves, in all its random glory.” This is precisely what IoT platforms do for us. They enable devices/objects to observe, identify and understand a situation or their surroundings without depending on human help.

IoT platforms can help organizations reduce cost through improved process efficiency, asset utilization and productivity. With improved tracking of devices/objects using sensors and connectivity, they can benefit from real-time insights and analytics, which would help them make smarter decisions. The growth and convergence of data, processes and things on the internet would make such connections more relevant and important, creating more opportunities for people, businesses and industries.


Besides the SAN (Storage Area Network), data can be stored using two other basic models: DAS (Direct Attached Storage) and NAS (Network Attached Storage).

DAS is so named because it is directly attached to a server, without any intermediary network involved. The main distinguishing characteristic of DAS (Direct Attached Storage) is its direct connectivity through a host bus adapter, without the use of networking devices such as hubs, bridges and switches.

In a network attached storage system, many independent clients can access the storage. The aim of NAS (Network Attached Storage) is to provide file-based storage only. DAS (Direct Attached Storage) can also provide multiple, parallel access if the storage device is equipped with multiple ports.

In the same way, a NAS (Network Attached Storage) device can be converted to DAS (Direct Attached Storage) by disconnecting it from the network and attaching it directly to a single computer. DAS (Direct Attached Storage) can be considered inefficient because it cannot share its idle resources with other units in the network.

NAS (Network Attached Storage) and SAN (Storage Area Network) can overcome this deficiency, but both are costly and harder to manage. The main DAS (Direct Attached Storage) protocols are SATA, SAS and Fibre Channel.


The practical benefit of having a SAN is that it allows the sharing of storage devices and frees administrators from the hassle of managing per-server physical disks and cabling. Servers can boot from the SAN (Storage Area Network) on their own, which allows a replacement server to make use of the logical unit numbers (LUNs) of a defective server.

A SAN (Storage Area Network) was initially so expensive that even major multinationals were wary of deploying one. Nowadays the cost of a SAN has dropped to a level where even some small organizations are using one. It can be easier to establish a SAN than to keep buying servers with plenty of disk space, and a SAN also increases disk utilization.

When you deploy a single storage network, you gain the ease of handling files as a single unit. Central storage pools reside at the network level, enabling you to allocate storage more conveniently and intelligently to the servers that need it.

The initial cost of implementing a SAN (Storage Area Network) disaster recovery system can be huge, but its higher efficiency and quick results can cover that cost over time. SANs use protective disk mechanisms to help ensure that data does not become corrupted. Another advantage of employing a SAN is that, once deployed, it typically requires little routine upgrading or intervention. A SAN offers a centralized storage network whose speed and efficiency far exceed a bundle of individual servers combined into a storage network.


Storage Area Networks (SANs) are frequently deployed in support of business-critical, performance-sensitive applications such as:

Oracle databases. These are frequently business-critical and require the highest performance and availability.
Microsoft SQL Server databases. Like Oracle databases, MS SQL Server databases commonly store an enterprise’s most valuable data, so they require the highest performance and availability.

Large virtualization deployments using VMware, KVM, or Microsoft Hyper-V. These environments often extend to thousands of virtual machines running a broad range of operating systems and applications, with different performance requirements. Virtualized environments concentrate many applications, so infrastructure reliability becomes even more important because a failure can cause multiple application outages.

Large virtual desktop infrastructures (VDIs). These environments serve virtual desktops to large numbers of an organization’s users. Some VDI environments can easily number in the tens of thousands of virtual desktops. By centralizing the virtual desktops, organizations can more easily manage data protection and data security.
SAP or other large ERP or CRM environments. SAN architectures are ideal for enterprise resource planning and customer relationship management workloads.

Fibre Channel (FC) SANs have the reputation of being expensive, complex and difficult to manage. Ethernet-based iSCSI has reduced these challenges by encapsulating SCSI commands into IP packets that don’t require an FC connection.

The emergence of iSCSI means that instead of learning, building and managing two networks — an Ethernet local area network (LAN) for user communication and an FC SAN for storage — an organization can use its existing knowledge and infrastructure for both LANs and SANs. This is an especially useful approach in small and midsize businesses that may not have the funds or expertise to support a Fibre Channel SAN.
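
The encapsulation idea behind iSCSI can be shown with a toy illustration: a SCSI-style command is wrapped in an iSCSI PDU, which in turn travels inside an ordinary TCP/IP packet over Ethernet. The field layouts below are invented for clarity and do not match the actual iSCSI wire format.

```python
# Toy layering illustration; headers are invented, not real iSCSI framing.

def wrap(layer_name, payload):
    """Prefix a payload with a tiny header naming the enclosing layer."""
    header = f"[{layer_name} len={len(payload)}]".encode()
    return header + payload

scsi_command = b"READ(10) LBA=0x1000 LEN=8"
packet = wrap("Ethernet", wrap("IP", wrap("TCP", wrap("iSCSI", scsi_command))))
print(packet.decode())
```

The nesting is the whole point: because the outermost layers are plain Ethernet and IP, the storage traffic can ride on the same switches and cabling as ordinary LAN traffic, which is exactly what makes iSCSI attractive to smaller organizations.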

Organizations use SANs for distributed applications that need fast local network performance. SANs improve the availability of applications through multiple data paths. They can also improve application performance because they enable IT administrators to offload storage functions and segregate networks.

Additionally, SANs help increase the effectiveness and use of storage because they enable administrators to consolidate resources and deliver tiered storage. SANs also improve data protection and security. Finally, SANs can span multiple sites, which helps companies with their business continuity strategies.


The internet of things offers a number of benefits to organizations, enabling them to:

  • Monitor their overall business processes;
  • Improve the customer experience;
  • Save time and money;
  • Enhance employee productivity;
  • Integrate and adapt business models;
  • Make better business decisions; and
  • Generate more revenue.

IoT encourages companies to rethink the ways they approach their businesses, industries and markets and gives them the tools to improve their business strategies.



The Internet of Things is a term first coined by British entrepreneur Kevin Ashton in 1999. It covers everything that is technologically smart and able to communicate with other devices, networks, systems and things. … Consumer IoT incorporates the above and is essentially IoT products aimed at the consumer space.

It is “a system of interrelated computing devices, mechanical and digital machines, objects, animals or people that are provided with unique identifiers and the ability to transfer data over a network without requiring human-to-human or human-to-computer interaction.” The Internet of Things (IoT) refers to the use of intelligently connected devices and systems to leverage data gathered by embedded sensors and actuators in machines and other physical objects. In other words, the IoT can refer to any physical object connected to a network.




Cloud computing has several definitions, which can be understood through the following points:

1) Cloud computing is a method for delivering information technology (IT) services in which resources are retrieved from the Internet through web-based tools and applications, as opposed to a direct connection to a server.

2) Cloud computing is shared pools of configurable computer system resources and higher-level services that can be rapidly provisioned with minimal management effort, often over the Internet. Cloud computing relies on sharing of resources to achieve coherence and economies of scale, similar to a public utility.
Rather than keeping files on a proprietary hard drive or local storage device, cloud-based storage makes it possible to save them to a remote database. As long as an electronic device has access to the web, it has access to the data and the software programs to run it.

3) Cloud computing is the delivery of computing services—servers, storage, databases, networking, software, analytics, intelligence and more—over the Internet (“the cloud”) to offer faster innovation, flexible resources and economies of scale.

4) It’s called cloud computing because the information being accessed is found in “the cloud” and does not require a user to be in a specific place to gain access to it. This type of system allows employees to work remotely. Companies providing cloud services enable users to store files and applications on remote servers and then access all the data via the internet.

5) The group of networked elements providing services need not be individually addressed or managed by users; instead, the entire provider-managed suite of hardware and software can be thought of as an amorphous cloud. You typically pay only for cloud services you use, helping lower your operating costs, run your infrastructure more efficiently and scale as your business needs change.