Glossary of terms

This page provides definitions and explanations of key terms related to the S1 system and to storage and data management in general.

ALUA

ALUA stands for Asymmetric Logical Unit Access, a feature implemented in storage systems that provide asymmetric paths to their storage resources.

In a storage system that supports ALUA, there are two types of paths to the storage resources: active/optimized paths and passive/non-optimized paths. The active/optimized paths are those that provide the best performance and are typically used for read and write operations. The passive/non-optimized paths are used as a backup or failover path when the active/optimized path fails.

ALUA is supported by a number of storage protocols, including iSCSI and Fibre Channel. The S1 system supports ALUA for both iSCSI and Fibre Channel.

Appliance

An appliance is a pre-configured hardware and software storage system.

Asynchronous replication

Asynchronous replication copies data from a source system to a target system with a time delay. Because writes do not wait for acknowledgment from the target, the source system operates without interruption and with lower write latency. The trade-off is a slight lag between the source and target systems, so the target may briefly be out of date. Asynchronous replication is commonly used when immediate data consistency is not critical, or for long-distance replication.

Chassis (server node chassis)

A server node chassis is the physical enclosure that houses the internal hardware components of a server node. It provides protection, facilitates easy installation and maintenance, and includes openings for external components and cooling systems. Chassis designs and sizes vary depending on the form factor of the server node.

Consistency Group (CG)

A Consistency Group (CG) is a logical grouping of multiple logical volumes, typically used for the purposes of data protection and disaster recovery. All of the volumes in a CG are synchronized together to ensure data consistency across the volumes. This allows for faster and more efficient data recovery in the event of a disaster.

ESXi hypervisor

ESXi is a type-1 hypervisor developed by VMware. It is a bare-metal hypervisor that runs directly on server hardware, without the need for an underlying operating system. ESXi virtualizes the server’s hardware resources, such as CPU, memory, storage, and network interfaces, allowing multiple virtual machines (VMs) to run simultaneously. It provides tools for creating, configuring, and managing VMs, along with features for resource allocation, performance optimization, security, and scalability. ESXi is integrated with VMware’s vSphere ecosystem, including vCenter Server, for centralized management of hosts and VMs.

IQN

An iSCSI Qualified Name (IQN) is a unique identifier for an iSCSI initiator or target. It is a logical name that is not linked to an IP address. IQNs are used to identify iSCSI devices on a network and to allow them to communicate with each other.

The IQN format is defined in RFC 3720. It takes the form:

iqn.yyyy-mm.naming-authority:unique-name

Where:

  • yyyy-mm is the year and month that the naming authority was established.
  • naming-authority is the reverse DNS name of the naming authority.
  • unique-name is any string that is unique to the iSCSI device.

For example, the IQN for an iSCSI initiator on a host named localhost, where the naming authority acme.com was established in January 2023, would be:

iqn.2023-01.com.acme:localhost
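The IQN layout described above can be checked with a short sketch (illustrative only: the pattern covers the common `iqn.` form, not the full RFC 3720 grammar, which also allows `eui.` names):

```python
import re

# Minimal pattern for the common "iqn." form:
#   iqn.yyyy-mm.reverse-dns-authority[:unique-name]
IQN_PATTERN = re.compile(
    r"^iqn\.\d{4}-\d{2}"       # "iqn." plus year-month the authority was established
    r"\.[a-z0-9][a-z0-9.-]*"   # reverse-DNS naming authority, e.g. com.acme
    r"(:[^\s]+)?$"             # optional :unique-name suffix
)

def is_valid_iqn(name: str) -> bool:
    """Return True if the string matches the common IQN layout."""
    return IQN_PATTERN.match(name.lower()) is not None
```

Note that the naming authority appears in reverse DNS order, so acme.com becomes `com.acme` inside the name.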

IQNs allow:

  • Unique identification of iSCSI devices on a network.
  • Communication between iSCSI devices.
  • Tracking the location of iSCSI devices.
  • Troubleshooting problems on an iSCSI network.

IQNs are a critical part of iSCSI networks, and they are used by all major iSCSI vendors.

InfiniBand

InfiniBand is a high-speed network technology that is designed to provide high-bandwidth and low-latency communication between servers, storage devices, and other components in a data center.

InfiniBand is often used in high-performance computing (HPC) environments where large amounts of data need to be processed quickly, such as in scientific research or financial modeling. InfiniBand can provide data transfer rates of up to 400 Gbps, which makes it well-suited for these types of demanding workloads.

InfiniBand uses a switch-based topology, which allows for the creation of large-scale networks that can be easily expanded as needed. InfiniBand also supports features such as quality of service (QoS) and congestion control, which help to ensure that data is delivered quickly and reliably.

InfiniBand is typically used in conjunction with other storage networking technologies, such as iSCSI or Fibre Channel, to provide a comprehensive storage solution for enterprise environments.

IOPS

IOPS stands for Input/Output Operations Per Second. It is a measure of the speed of a storage device, such as a hard drive or solid-state drive (SSD). IOPS is calculated by dividing the number of input/output operations by the time it takes to complete those operations.

For example, if a storage device can complete 100 input/output operations in one second, it has an IOPS rating of 100.
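The arithmetic in the example above is a simple division, which can be sketched as:

```python
def iops(operations: int, seconds: float) -> float:
    """IOPS = completed I/O operations divided by the elapsed time in seconds."""
    return operations / seconds

# 100 operations completed in one second is an IOPS rating of 100.
rating = iops(100, 1.0)
```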

IOPS is an important metric for determining the performance of a storage device. Higher IOPS ratings indicate faster storage devices. This is important for applications that require a lot of random access, such as database servers and web servers.

There are a number of factors that can affect the IOPS rating of a storage device, including the type of storage media, the size of the storage device, and the type of controller used.

Hard drives typically have lower IOPS ratings than SSDs. This is because hard drives have moving parts, which can slow down access times. SSDs, on the other hand, have no moving parts, which makes them faster.

The size of the storage device can also affect IOPS ratings. For hard drives, higher-capacity models may use more platters and denser tracks, which can lengthen seek and access times and lower IOPS.

The type of controller used can also affect IOPS ratings. Controllers that are designed for high-performance applications typically have higher IOPS ratings than controllers that are designed for general-purpose applications.

When choosing a storage device, it is important to consider the IOPS rating of the device. The IOPS rating will help you determine how well the device will perform for your specific needs.

iSCSI

iSCSI stands for Internet Small Computer System Interface, which is a storage networking protocol used to access and manage storage devices over an IP network. With iSCSI, storage devices such as hard disk drives and storage arrays can be connected to a network and accessed by servers as if they were locally attached storage devices. iSCSI allows block-level access to storage devices, which means that data is stored and retrieved in blocks rather than files.

iSCSI works by encapsulating SCSI commands into IP packets, which can then be transmitted over an Ethernet network. To access a remote iSCSI target device, the server initiates a connection to the target using an iSCSI initiator software component. The target responds to the initiator’s request and provides access to the storage device.

iSCSI is often used in environments where traditional Fibre Channel (FC) SANs are too expensive or complex to implement. iSCSI provides a lower-cost alternative to FC and can be easily integrated into existing Ethernet networks. However, iSCSI typically has lower performance compared to FC due to the overhead associated with encapsulating SCSI commands into IP packets.

Fibre Channel

Fibre Channel (FC) is a high-speed storage networking protocol that enables the transfer of large amounts of data between servers and storage devices.

FC provides dedicated, lossless connections between servers and storage devices, supporting point-to-point, arbitrated loop, and switched fabric topologies. This allows for very high data transfer rates and low latency. FC can support data transfer rates of up to 128 Gbps, which makes it well-suited for high-performance computing environments.

FC works by encapsulating SCSI commands into FC frames, which are then transmitted over a fiber optic cable. The FC protocol provides features such as flow control, error detection and recovery, and zoning, which allow for reliable and secure data transfers between servers and storage devices.

FC is often used in storage area networks (SANs) where multiple servers need to access the same storage devices. SANs provide a centralized storage infrastructure that can be shared by multiple servers, which helps to improve storage utilization and simplify storage management.

FC can be expensive to implement and often requires specialized hardware, such as FC host bus adapters (HBAs) and FC switches. In addition, FC requires a separate network infrastructure from Ethernet-based networks, which can add complexity to the overall IT environment.

HA interconnect

A high availability (HA) interconnect is a redundant connection between the two nodes of a two-node appliance, ensuring that if one node fails, the other node can take over seamlessly to maintain uninterrupted service. The interconnect provides redundancy and fault tolerance in the event of a failure or outage, making the overall system more reliable and available. In an HA configuration, the two nodes work together to provide continuous service, and the interconnect is typically implemented using redundant network links between the nodes.

HBA (Host Bus Adapter)

A Host Bus Adapter (HBA) is a hardware device that allows a computer or server to connect to a storage device, such as a hard drive or a disk enclosure, over a high-speed network such as Fibre Channel or SAS (Serial Attached SCSI). The HBA acts as an interface between the host system and the storage device, offloading processing tasks from the main CPU to improve performance and reduce load on the system. HBAs are commonly used in enterprise storage environments to provide fast, reliable access to large amounts of data, and they are available in a variety of form factors and speeds to suit different needs. When used to connect to a disk enclosure, an HBA allows multiple hard drives or solid-state drives to be housed in a single unit, providing scalable and high-capacity storage solutions.

Network Address Authority (NAA)

NAA stands for Network Address Authority. In the context of storage and virtualization, an NAA identifier is the unique identifier assigned to a storage device or logical unit (LUN) within a storage area network (SAN) environment.

The NAA is a globally unique identifier that helps identify and differentiate storage devices, allowing them to be uniquely addressed and accessed within the SAN infrastructure. It is typically assigned by the storage controller or SAN fabric during the device initialization process.

The NAA is represented as a string of alphanumeric characters, usually in a format specified by the SCSI (Small Computer System Interface) standard, which is commonly used in SAN environments. The NAA provides a standardized method for identifying and referencing storage devices across different systems and vendors.

In virtualization platforms like VMware ESXi, the NAA is used to uniquely identify and manage storage devices presented to virtualization hosts. It helps identify specific LUNs or disks, enabling administrators to perform operations such as provisioning, mapping, and monitoring of storage resources within the virtualization environment.

By using the NAA, administrators can ensure accurate and consistent identification of storage devices across different systems, simplify storage management, and maintain data integrity within the SAN infrastructure.

Network File System (NFS)

A Network File System (NFS) data store is a distributed file system protocol that allows remote file access and sharing over a network. It enables a client to access files and directories on a remote server as if they were local to the client’s own system.

In the context of virtualization and storage, an NFS data store refers to a shared storage resource accessed by virtualization hosts or hypervisors such as VMware ESXi or Citrix XenServer. It provides a centralized storage location for virtual machine (VM) files, including virtual disks, configuration files, and snapshots.

By utilizing NFS data stores, multiple virtualization hosts can access the same storage resource simultaneously, allowing for features like VM migration, high availability, and load balancing. NFS data stores offer advantages such as scalability, flexibility, and ease of management, as they provide a unified storage platform for virtualization environments.

NVMe Qualified Name (NQN)

An NVMe Qualified Name (NQN) is a unique identifier for an NVMe over Fabrics (NVMe-oF) endpoint, such as a host (initiator) or an NVMe subsystem (target).

The purpose of an NQN is to provide an identifier that is independent of the endpoint's network address. This allows NVMe-oF hosts and subsystems to be uniquely identified and located on a network, even if their addresses change.

The format of an NQN mirrors that of an IQN:

nqn.yyyy-mm.naming-authority:unique-name

  • yyyy-mm is the year and month that the naming authority was established.
  • naming-authority is the reverse DNS name of the naming authority.
  • unique-name is any string that is unique to the endpoint.

NQNs provide:

  • Unique identification of NVMe-oF hosts and subsystems on a network.
  • Communication between NVMe-oF endpoints.
  • Tracking the location of NVMe-oF endpoints.
  • Troubleshooting problems on an NVMe-oF network.

NQNs are a critical part of NVMe over Fabrics deployments, and they are used by all major NVMe-oF vendors.

NVMe (Non-Volatile Memory Express)

NVMe stands for Non-Volatile Memory Express, a storage interface protocol designed to take advantage of the performance and low latency of solid-state drives (SSDs). It was developed to replace the older AHCI (Advanced Host Controller Interface) protocol, which was designed for spinning hard disk drives (HDDs) and is less optimized for SSDs. NVMe uses a streamlined command set and a high-speed PCIe (Peripheral Component Interconnect Express) connection to enable faster data transfer rates and lower latency compared to AHCI. NVMe SSDs are available in various form factors, including U.2, M.2, and PCIe add-in cards, and are compatible with many modern operating systems.

NVMe over Fabrics (NVMe-oF)

NVMe-oF stands for NVMe over Fabrics, which is a technology that allows Non-Volatile Memory Express (NVMe) storage devices to be accessed over a network using standard transports such as Ethernet, Fibre Channel, or InfiniBand.

NVMe is a high-performance storage protocol that is designed to take advantage of the low latency and high bandwidth of solid-state drives (SSDs). However, NVMe was originally designed to work over a PCIe interface, which limits its use to local storage devices.

NVMe-oF extends the benefits of NVMe to networked storage devices by encapsulating NVMe commands and data into standard network protocols. This allows NVMe storage devices to be accessed over a network as if they were local storage devices, without the need for specialized hardware or software.

NVMe-oF provides a number of benefits for enterprise storage environments. It enables the use of high-performance NVMe storage devices in networked storage environments, which can help to improve storage performance and reduce storage costs. NVMe-oF also provides flexibility in terms of storage deployment and management, allowing organizations to take advantage of the benefits of NVMe without requiring significant changes to their existing storage infrastructure.

Latency

Latency is the time it takes for data to travel from one point to another. In the context of computer networking, latency is the time it takes for a data packet to travel from one device to another. Latency is measured in milliseconds (ms).
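As a simple illustration, the elapsed time of any operation can be measured in milliseconds (a sketch only; `measure_latency_ms` is a made-up helper, and real network latency is usually measured with tools such as ping):

```python
import time

def measure_latency_ms(operation) -> float:
    """Time a single call and return the elapsed wall-clock time in milliseconds."""
    start = time.perf_counter()
    operation()
    return (time.perf_counter() - start) * 1000.0

# Example: time a deliberate 5 ms sleep standing in for a network round trip.
elapsed = measure_latency_ms(lambda: time.sleep(0.005))
```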

Low latency is important for applications that require real-time communication, such as online gaming and video conferencing. High latency can cause delays in communication, which can lead to poor performance and a negative user experience.

There are a number of factors that can affect latency, including the distance between the devices, the type of network, and the amount of traffic on the network.

There are a number of ways to reduce latency, including:

  • Using a high-speed network
  • Using a direct connection between the devices
  • Reducing the amount of traffic on the network

Latency is an important factor to consider when designing and implementing computer networks. By understanding the factors that affect latency and how to reduce it, you can improve the performance of your network and provide a better user experience.

IOSIZE

IOSIZE stands for Input/Output Size. It is the size of the data that is being read or written to a storage device in a single operation. IOSIZE can have a significant impact on the performance of a storage device.

Larger IOSIZEs can improve performance because each I/O operation carries fixed overhead (command processing, interrupts); transferring more data per operation amortizes that cost and lets the storage device's caching and prefetching work effectively. Caching is a technique that stores data in memory so that it can be accessed more quickly.

Smaller IOSIZEs can degrade throughput because the fixed per-operation overhead dominates the transfer time, although small I/Os are often unavoidable for random-access workloads.

The optimal IOSIZE for a storage device will vary depending on the type of storage device, the workload, and the caching capabilities of the storage device. In general, larger IOSIZEs will improve performance for most workloads.

Here are some tips for optimizing IOSIZE:

  • Use the largest IOSIZE that is supported by the storage device and the workload.
  • Avoid using small IOSIZEs, especially for workloads that involve a lot of random access.
  • Use a caching device, such as a RAID controller or a solid-state drive, to improve the performance of small IOSIZEs.
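Although not stated in the entry above, IOSIZE ties IOPS and throughput together through the rule of thumb throughput ≈ IOPS × IOSIZE. A minimal sketch (the function name is illustrative, and real devices rarely sustain their peak IOPS at every I/O size):

```python
def est_throughput_mb_s(iops: float, iosize_bytes: int) -> float:
    """Approximate throughput in MB/s as IOPS times I/O size (rule of thumb)."""
    return iops * iosize_bytes / 1_000_000

# 10,000 IOPS at a 4 KiB I/O size is roughly 41 MB/s.
estimate = est_throughput_mb_s(10_000, 4096)
```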

Object storage

Object storage is a scalable, cost-effective, and flexible way to store large amounts of unstructured data. It is a data storage architecture that manages data as objects. An object is a unit of data that consists of:

  • Data: The actual data being stored.
  • Metadata: Data about the data, such as its name, size, and creation date.
  • Object identifier: A unique identifier for the object.

Object storage is typically stored in a flat namespace, meaning that it is not organized into folders or directories. This makes it easy to store and access large amounts of data.

Object storage is often used to store unstructured data, such as images, videos, and audio files. It can also be used to store structured data, such as database records and log files.
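The object model above (data, metadata, object identifier, flat namespace) can be sketched as a toy in-memory store. This is illustrative only: `ObjectStore`, `put`, `get`, and `head` are hypothetical names, not an S1 or cloud-provider API.

```python
import uuid

class ObjectStore:
    """Toy in-memory object store: a flat namespace keyed by object identifier."""

    def __init__(self):
        self._objects = {}

    def put(self, data: bytes, **metadata) -> str:
        """Store data with its metadata; return the generated object identifier."""
        object_id = str(uuid.uuid4())
        self._objects[object_id] = {"data": data, "metadata": metadata}
        return object_id

    def get(self, object_id: str) -> bytes:
        """Retrieve the stored data for an object."""
        return self._objects[object_id]["data"]

    def head(self, object_id: str) -> dict:
        """Retrieve only the metadata, without the data itself."""
        return self._objects[object_id]["metadata"]
```

Note that there are no directories: every object is addressed directly by its identifier, which is what makes the namespace flat.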

Benefits of using object storage:

  • Scalability: Object storage can be easily scaled up or down to meet changing storage needs.
  • Cost-effectiveness: Object storage is typically less expensive than other storage options, such as file storage and block storage.
  • Flexibility: Object storage can be accessed from anywhere, making it ideal for cloud-based applications.
  • Durability: Object storage is typically stored in multiple locations, making it less vulnerable to data loss.

Use cases for object storage:

  • Web hosting: Object storage can be used to store web pages, images, and other media files.
  • Backup and disaster recovery: Object storage can be used to store backups of data, making it easy to restore data in the event of a disaster.
  • Content delivery networks (CDNs): Object storage can be used to store static content, such as images and videos, which can then be delivered to users more quickly.
  • Big data analytics: Object storage can be used to store large amounts of data for big data analytics.

Pool

A pool is a logical container that combines and manages physical drives or other storage resources, providing an abstraction layer between the physical storage and the logical volumes. It enables IT to allocate and manage storage resources more efficiently by consolidating the available storage into a unified and easily-manageable entity.

Profile mode

In a file system, a profile mode determines how changes are journaled: what is written to the journal, and when data is committed to disk relative to the journal entries.

In the S1 system, the profile mode can be set to either ordered or journal.

  • In ordered mode, all data is written to the journal before it is written to disk. This provides the highest level of safety, as it ensures that all data is committed to disk even if the system crashes before the write is complete. However, ordered mode can also be the slowest mode, as it requires all data to be written to the journal before it can be written to disk.
  • In journal mode, only metadata is written to the journal. Data is written directly to disk, and the journal is only used to track changes to the file system. This mode is faster than ordered mode, but it is also less safe. If the system crashes before the write is complete, any data that has not been written to disk will be lost.

The best profile mode depends on your needs. If you need the highest level of safety, ordered mode is the best choice. If you need the best performance, journal mode is the best choice.

Semisynchronous replication

Semisynchronous replication combines aspects of synchronous and asynchronous replication. It requires acknowledgment from at least one replica before considering a write operation complete, striking a balance between data consistency and performance. It provides increased data reliability compared to asynchronous replication but with lower latency impact than fully synchronous replication.

Shared storage

Shared storage is a storage system that is accessed by multiple users or computers. It stores all of the files in a centralized pool of storage and allows multiple users to access them at once. In the S1 system shared storage volumes are implemented through the use of Network Attached Storage (NAS).

Shared storage offers a number of benefits over traditional local storage, including:

  • Centralized management: Shared storage can be managed from a central location, which makes it easier to manage large amounts of data.
  • Scalability: Shared storage can be scaled to meet the needs of growing businesses.
  • Performance: Shared storage can provide high performance for applications that require access to large amounts of data.
  • Security: Shared storage can be more secure than traditional local storage because it can be centrally managed and backed up.

Single-node appliance

A single-node appliance is a type of storage system that consists of a single node server that provides storage services to clients. This type of appliance is typically used in small or medium-sized environments where high availability and redundancy are not critical requirements. Since there is only one node, there is no automatic failover or load balancing capability, which means that any failures could result in a service interruption until the problem is resolved.

SMART (Self-Monitoring Analysis and Reporting Technology)

SMART, which stands for Self-Monitoring, Analysis, and Reporting Technology, is a feature that is built into modern hard disk drives (HDDs) and solid-state drives (SSDs). SMART provides information about the health and performance of the drive by monitoring various parameters, such as the number of read/write errors, reallocated sectors, temperature, and spin-up time. A SMART drive, therefore, is a disk drive that supports the SMART feature and can report this information to the host system. This information can be used to detect and predict drive failures, which can help prevent data loss and downtime. SMART can be accessed and monitored through various software tools, such as diagnostic utilities and system monitoring software.

Storage Platform

A hardware and software system that provides a set of tools and infrastructure for managing and accessing data. Typically it includes one or more storage engines, networking components, security features, management tools, and physical hardware components.

Synchronous replication

Synchronous replication is a real-time data duplication method where data is immediately replicated from a source system to a target system. It ensures strong data consistency by waiting for confirmation before completing write operations. This guarantees that both systems are always synchronized and provides reliable data integrity. However, it can introduce additional latency and performance impact due to the confirmation process.

RDMA

RDMA (Remote Direct Memory Access) is a technology that enables data to be transferred directly between the memory of two computers or devices without involving the operating system and CPU of either device.

Traditional network communication requires the involvement of the CPU and operating system on both the sending and receiving devices, which can introduce latency and reduce performance. With RDMA, data is transferred directly between the memory of two devices, which can significantly reduce latency and improve network performance.

RDMA is often used in high-performance computing environments where low-latency, high-bandwidth communication is critical, such as in scientific research, financial modeling, or real-time data analytics. RDMA is also used in storage networking environments, such as in Storage Area Networks (SANs), to improve storage performance and reduce latency.

RDMA can be implemented using a number of different networking technologies, such as InfiniBand or Ethernet. To use RDMA, both the sending and receiving devices must support the same RDMA protocol and the necessary network infrastructure must be in place.

Replication floating IP

A replication floating IP is a virtual IP address that is used to replicate data between two systems. When a logical volume is replicated, the data is copied from the source system to the target system. The replication floating IP is used to connect the two systems and to ensure that the data is synchronized.

Throughput

Throughput is a measure of how much data can be transferred in a given amount of time. It is typically measured in bits per second (bps), as in megabits per second (Mbps) or gigabits per second (Gbps).
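The unit conversion implied above (bytes transferred over time, expressed in megabits per second) can be sketched as:

```python
def throughput_mbps(bytes_transferred: int, seconds: float) -> float:
    """Throughput in Mbps: convert bytes to bits, divide by time, scale to mega."""
    return bytes_transferred * 8 / seconds / 1_000_000

# Transferring 125 MB in 10 seconds works out to 100 Mbps.
rate = throughput_mbps(125_000_000, 10.0)
```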

Throughput is important for many applications, such as file transfers, streaming media, and online gaming. It is also important for businesses that need to transfer large amounts of data, such as cloud computing providers and content delivery networks.

There are a number of factors that can affect throughput, including the type of network, the type of connection, and the amount of traffic on the network.

To improve throughput, you can:

  • Use a faster network connection, such as a gigabit Ethernet connection.
  • Use a direct connection between the devices, such as a point-to-point connection.
  • Reduce the amount of traffic on the network, such as by using a content delivery network.

Throughput is an important metric for measuring the performance of a network. By understanding the factors that affect throughput, you can improve the performance of your network and provide a better user experience.

Trimming

Trimming a file system is the process of telling a solid-state drive (SSD) that blocks of data that are no longer in use can be erased. This can improve the performance of the SSD by allowing it to more efficiently manage its storage space.

When a file is deleted from an SSD, the data that makes up the file is not actually erased from the drive. Instead, the file is marked as deleted and the space that the file was using is marked as available. This means that the data from the deleted file can still be read by the SSD, even though the file itself is no longer visible to the user.

When an SSD needs to write new data, it will first look for blocks of data that are not in use. If there are no blocks of free space available, the SSD will have to erase a block of data that is currently in use. This process is called garbage collection.

Garbage collection can be a slow process, especially if the SSD is already close to full. Trimming a file system tells the SSD that certain blocks of data are no longer in use, which can speed up the garbage collection process.

Here are some of the benefits of trimming a file system:

  • Improves performance: Trimming a file system can improve the performance of an SSD by allowing it to more efficiently manage its storage space.
  • Prolongs the life of the SSD: Trimming a file system can prolong the life of an SSD by reducing the number of times that the drive has to erase data.
  • Frees up space: Trimming a file system can free up space on an SSD by marking blocks of data that are no longer in use as available.

Two-node appliance

A two-node appliance is a type of storage system that consists of two nodes (or servers) connected to each other for high availability (HA) purposes. In this configuration, the two nodes are usually configured to provide redundancy, with one node serving as the primary node while the other acts as a standby node ready to take over in the event of a failure. This type of appliance provides a highly reliable and scalable storage solution with automatic failover and load balancing capabilities.

vCenter client

The vCenter Client is a graphical interface provided by VMware for managing vSphere environments. It allows administrators to configure, monitor, and administer resources such as virtual machines, networks, and storage. It provides real-time monitoring, task logging, and supports plug-ins for extended functionality.

vCenter server

vCenter Server is VMware’s centralized management platform for vSphere virtualization. It offers key features like centralized management, resource pooling, VM lifecycle management, high availability, performance monitoring, security, access control, and automation. It helps administrators streamline operations, improve resource utilization, ensure availability, and simplify virtualized environment management.

  • Centralized Management: vCenter Server acts as a central management point for vSphere environments. It allows administrators to control and monitor multiple ESXi hosts and their associated VMs from a single interface.

  • Resource Pooling: vCenter Server enables resource pooling by aggregating the computing, storage, and networking resources of multiple ESXi hosts into logical clusters. This allows for efficient resource allocation and management across the infrastructure.

  • VM Lifecycle Management: vCenter Server provides tools for managing the complete lifecycle of VMs, including VM provisioning, deployment, cloning, and templates. It simplifies tasks such as creating new VMs, adjusting resource allocation, and managing VM snapshots.

  • High Availability and Fault Tolerance: vCenter Server offers features like vSphere High Availability (HA) and vSphere Fault Tolerance (FT) for ensuring the availability of VMs. HA automatically restarts VMs on different hosts in the event of a host failure, while FT provides continuous availability with synchronized VM replicas.

  • Performance Monitoring: vCenter Server includes performance monitoring and reporting capabilities, allowing administrators to monitor resource utilization, track performance trends, and identify bottlenecks. It provides insights into CPU, memory, storage, and network usage across the infrastructure.

  • Security and Access Control: vCenter Server implements security measures to control access and ensure the integrity of the virtualized environment. It supports role-based access control (RBAC), allowing administrators to define granular permissions and manage user access to vSphere resources.

  • Automation and Orchestration: vCenter Server integrates with automation and orchestration tools, such as VMware vRealize Automation and vRealize Orchestrator, enabling automated provisioning, policy-based management, and workflow automation.


vJBOD (virtual Just a Bunch of Disks)

vJBOD is StorONE’s implementation of virtual Just a Bunch of Disks technology. It enables the creation of a shared storage pool that can be accessed by multiple hosts or virtual machines, providing high performance and availability. It uses advanced algorithms and data placement techniques to distribute data efficiently across multiple disks and optimize performance. The result is a flexible, scalable storage solution that can be customized to manage large amounts of data across multiple hosts or virtual machines.
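As an illustration of distributing data across the disks of a pool, here is a minimal round-robin placement sketch. StorONE’s actual placement algorithms are more elaborate; the `StripedPool` class and its behavior are hypothetical:

```python
class StripedPool:
    """Toy illustration of spreading data chunks across member disks
    round-robin (hypothetical; real vJBOD placement is more sophisticated)."""
    def __init__(self, disk_count):
        self.disks = [[] for _ in range(disk_count)]

    def place(self, chunk_id, chunk):
        disk = chunk_id % len(self.disks)   # round-robin placement
        self.disks[disk].append((chunk_id, chunk))
        return disk

pool = StripedPool(disk_count=4)
placements = [pool.place(i, f"chunk{i}") for i in range(8)]
print(placements)   # [0, 1, 2, 3, 0, 1, 2, 3]
```

Spreading consecutive chunks across disks lets reads and writes proceed in parallel, which is the basic reason striping improves throughput.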

Virtual Machine File System

A Virtual Machine File System (VMFS) is a file system developed by VMware for virtualization environments. It provides a dedicated storage platform for VMware ESXi hosts and virtual machines (VMs).

  • VMFS data stores are logical storage containers created on physical storage devices.
  • They store virtual machine disk files, configuration files, snapshots, and templates.
  • VMFS supports shared access, allowing multiple ESXi hosts to simultaneously access the same virtual machine files.
  • It offers high-performance I/O and caching mechanisms optimized for virtualized workloads.
  • VMFS data stores can scale by spanning multiple disks or LUNs, accommodating growing infrastructures.
  • Thin provisioning enables efficient disk space utilization by dynamically allocating storage as needed.
  • Snapshot and cloning features allow for point-in-time copies and replication of VMs.
  • The management and control of VMFS data stores are facilitated by VMware vSphere, offering a centralized platform for administration. The S1 system can provision VMFS data stores by connecting to the vSphere environment.
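The thin-provisioning point above can be illustrated with a toy model: a block consumes backing storage only when it is first written, and unwritten blocks read back as zeros. This is a conceptual sketch, not VMFS’s actual on-disk format; the `ThinDisk` class is hypothetical:

```python
class ThinDisk:
    """Toy thin-provisioned virtual disk: backing storage is allocated
    per block on first write (conceptual model, not VMFS internals)."""
    def __init__(self, size_blocks):
        self.size_blocks = size_blocks      # provisioned (logical) size
        self.blocks = {}                    # only written blocks are stored

    def write(self, block_no, data):
        if not 0 <= block_no < self.size_blocks:
            raise IndexError("block outside provisioned size")
        self.blocks[block_no] = data        # allocate on first write

    def read(self, block_no):
        return self.blocks.get(block_no, b"\x00")  # unwritten blocks read as zeros

    @property
    def allocated_blocks(self):
        return len(self.blocks)

disk = ThinDisk(size_blocks=1_000_000)   # large logical disk
disk.write(0, b"boot")
disk.write(42, b"data")
print(disk.allocated_blocks)             # 2 -- only written blocks consume space
```

The gap between `size_blocks` and `allocated_blocks` is why thin provisioning lets administrators present more logical capacity than the physical storage currently holds.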

Virtual Storage Container (VSC)

A VSC, or Virtual Storage Container, is a logical storage environment created within the StorONE Engine. It is a self-contained entity that can group together one or more storage resources, such as pools or consistency groups, to create a unified storage solution for an entire data center. Unlike a pool, which provides an abstraction layer between the physical storage drives and the logical volumes, a VSC is a higher-level container that can include additional storage resources and features, such as snapshots and tiering policies. VSCs enable IT teams to provision and manage their storage resources in a more flexible and efficient manner.

vSphere

vSphere is VMware’s suite of server virtualization products that includes the ESXi type-1 hypervisor and vCenter management software. Its major components are ESXi, vCenter Server, vSphere Client, SDKs, VMFS, Virtual SMP, vMotion, Storage vMotion, HA, DRS, Storage DRS, Fault Tolerance, VDS, and Host Profiles.

WWN

A World Wide Name (WWN) or World Wide Identifier (WWID) is a unique identifier used in storage technologies including Fibre Channel, Parallel ATA, Serial ATA, SCSI and Serial Attached SCSI (SAS).

WWNs are 64-bit numbers that are assigned to each storage device by the manufacturer. They are used to uniquely identify devices on a storage network, and to allow devices to communicate with each other.

There are two types of WWNs:

  • World Wide Node Name (WWNN): A WWNN identifies a storage device, such as a Fibre Channel HBA or a storage controller.
  • World Wide Port Name (WWPN): A WWPN identifies a specific port on a storage device.

WWNs are important for several reasons:

  • They provide a stable, unique identity for every device on a storage network.
  • Fabric services use them to authorize communication between devices.
  • They can be used to track where storage devices are attached on the fabric.

WWNs are a critical part of storage networks, and they are used by all major storage vendors.

  • The manufacturer identifiers embedded in WWNs, known as Organizationally Unique Identifiers (OUIs), are registered with the Institute of Electrical and Electronics Engineers (IEEE); the manufacturer assigns the remaining bits.

  • WWNs are 64-bit numbers that are represented as 16 hexadecimal digits.

  • A WWN contains a Network Address Authority (NAA) field, the 24-bit OUI that identifies the manufacturer, and a vendor-assigned portion.

  • The NAA is a 4-bit code that identifies the format of the WWN, that is, how the remaining bits are laid out.

  • In zoning, WWPNs are used to control which devices on a Fibre Channel fabric are permitted to communicate with each other.

  • Because WWNs are stable identifiers, they are useful when troubleshooting a storage network: fabric discovery and name-server queries report which WWNs are logged in on which switch ports.
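The structure described above can be demonstrated with a short parsing sketch. It assumes an NAA 2 (IEEE Extended) format WWN, where the OUI occupies bytes 2–4; the `parse_wwn` helper and the example value are illustrative, not part of any vendor tool:

```python
def parse_wwn(wwn: str):
    """Split a colon-separated 64-bit WWN into its NAA nibble and,
    for NAA 2 (IEEE Extended) format, the 24-bit manufacturer OUI."""
    raw = bytes.fromhex(wwn.replace(":", ""))
    if len(raw) != 8:
        raise ValueError("expected a 64-bit WWN (16 hex digits)")
    naa = raw[0] >> 4            # top 4 bits: Network Address Authority
    oui = raw[2:5].hex(":")      # NAA 2 places the OUI in bytes 2..4
    return naa, oui

# Illustrative NAA 2 WWPN; the leading '2' nibble marks the IEEE Extended format
print(parse_wwn("20:00:00:25:b5:01:a2:03"))   # (2, '00:25:b5')
```

Other NAA formats (such as NAA 5, IEEE Registered) lay out the OUI differently, so a production parser would branch on the NAA value before extracting fields.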

    Last updated on 8 Jan 2023
    Published on 8 Jan 2023