Business 2.0

June 6, 2007

Conquering the Challenges of Managing a Data Center

Filed under: Announcements, Articles, General News, Market Research, Storage — Yogesh Hublikar @ 5:44 pm

 

Ensuring high availability is the biggest challenge faced by CIOs while managing their data centers, and about 50% of them have outsourced some monitoring and management to a third party. Here we explore these and other data center management challenges in detail.

 

Sanjay Majumder

Over the last few years, the number of data centers has grown at an exceptional rate. The term ‘data center’ brings to mind a highly secure room spread over acres of land, with organized cabling infrastructure, extreme cooling and a dedicated power house. In reality, things are slightly different: a data center, in simple terms, is nothing but a place that holds your data, IT infrastructure and applications. In the early days there was no such term as ‘data center’; there were server rooms where all the servers were kept and managed by an expert IT team. With the dot-com boom, the emphasis on data centers rose at a phenomenal rate. Initially, these facilities were built by ISPs for hosting applications, servers, etc for their clients. These days, nearly every enterprise has a data center of its own, but the complexity of managing these data centers efficiently has also become a challenge for the CIO. Therefore, we decided to find out the key data center management issues faced by CIOs and try to find answers for them. For this, we interacted with 28 key CIOs from across the country.

Understanding the key challenges
52% of the respondents said that ensuring high availability was the most challenging task for them. Around 24% said capacity planning was their key concern. While 10% primarily faced issues like keeping costs under control, the remaining 14% said ensuring optimal utilization of resources was their prime challenge.

To ensure high availability, you need redundant power backup. Second, the data center should adopt network load balancing along with DR, so that stress on the data center is minimized. For the critical applications running in your data center, you should also have an automatic fail-over setup. Build redundancy into every element that can affect availability, for example, switches and routers. If you don’t have enough trained staff to provide high availability, it is better to outsource the management of your critical apps.

To combat the issue of capacity planning, one option suggested by some of our respondents was server consolidation. One requirement for successful server consolidation is monitoring your IT resources and then formulating the strategy; broadly speaking, server consolidation translates into IT resource management. Revamp your data center (DC) only if your current one can’t take the load of your upcoming projects; if you don’t have enough time or budget, an outsourced DC would be a better option. Virtualization is another solution for capacity planning: it taps the unused processing power of the servers in your data center. Moreover, with virtualization you can add more apps in the same environment to utilize unused server capacity for efficient resource management. This also helps with other concerns, like ensuring optimal utilization of resources and keeping costs under control.
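As a rough illustration of the automatic fail-over idea above, the sketch below probes a prioritized list of endpoints and returns the first one that accepts a TCP connection. The host names in the usage note are hypothetical, and a production setup would rely on a load balancer or cluster manager rather than this minimal check.

```python
import socket

def is_alive(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def pick_endpoint(endpoints):
    """Return the first reachable (host, port) pair, or None.

    Endpoints are listed in priority order: primary first, standbys after,
    so a healthy primary is always preferred.
    """
    for host, port in endpoints:
        if is_alive(host, port):
            return (host, port)
    return None
```

A monitoring script could call, say, `pick_endpoint([("app-primary.example.com", 8080), ("app-standby.example.com", 8080)])` (illustrative host names) and repoint clients whenever the result changes.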
 

Key management concerns
Power concerns top the list, followed by crash and recovery. There are also connectivity, cooling and data backup issues. Let’s take these one by one.

Power is the basic need of a large data center, and as your data center grows, you will require more electricity to run your infrastructure. Here too, capacity planning plays a major role: evaluate the present and future power requirements of the data center, then deploy a power conditioning system for your DC, including a UPS and an in-house power generation unit. These days, many organizations run their own generation units to power their data center grid.

Next, you may face crash recovery issues. For instance, if a mission-critical application fails due to a hardware failure, what is your recovery strategy to bring it back with minimum downtime? Keep hardware and spares ready in stock, so that you can simply replace the failed part and host the application on a new piece of hardware.

Connectivity issues are another common area of concern for CIOs managing data centers. In fact, one interesting aspect that came up from our survey was the availability of network equipment. What if one switch fails somewhere in your large data center? How quickly would you be able to find and rectify it before something disastrous happens? For this, you need real-time monitoring of the networking equipment and failover support for the most critical pieces.

Data centers have many servers and other equipment that generate a huge amount of heat. As the temperature rises, it adversely affects the performance of the data center, and the chances of wear and tear on equipment also increase. Therefore, cooling plays a very important part in your DC. Before building a DC, analyze your cooling requirements and design the facility accordingly.
For an existing data center, you should put in temperature monitoring and control equipment. One respondent said that for additional cooling on demand, you can also deploy emergency chillers.
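The monitoring-and-control advice above can be sketched as a trivial threshold classifier. The warning and critical temperatures, and the sensor names, are illustrative assumptions here, not vendor limits; real thresholds come from your equipment specifications.

```python
# Illustrative thresholds in Celsius; substitute your equipment's rated limits.
WARN_C = 27.0   # raise an alert, investigate airflow
CRIT_C = 32.0   # trigger emergency cooling or a controlled shutdown

def classify(readings):
    """Map each sensor name to 'ok', 'warn' or 'crit' from Celsius readings."""
    status = {}
    for sensor, temp in readings.items():
        if temp >= CRIT_C:
            status[sensor] = "crit"
        elif temp >= WARN_C:
            status[sensor] = "warn"
        else:
            status[sensor] = "ok"
    return status
```

A cron job polling rack sensors could feed this and page staff on any `"crit"` result.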

 

© Source: PCQuest  

 


June 4, 2007

‘The future need is for mobile business management’

Filed under: Articles, General News, Market Research, Storage — Yogesh Hublikar @ 4:33 pm

N Chandrasekaran, GM- IT, Ashok Leyland speaks about challenges from vendors and service providers, and IT infrastructure plans including a DR center in Hosur
 

 


N Chandrasekaran, who spearheads the IT strategies at Ashok Leyland, the Indian flagship of the Hinduja Group, is planning to roll out, with the help of HP, a Score Card system based on Oracle to address performance measurement and incentivization of dealerships. In an interview with VOICE&DATA, Chandrasekaran spoke about challenges from vendors and service providers, while sharing plans relating to IT infrastructure, including a disaster recovery center at Hosur.

Ashok Leyland has been investing heavily in IT infrastructure. What are your future investments?
We have built our own ERP system, the backbone for the business. The investment in this and related support mechanisms is to the tune of Rs 60–70 crore. This has helped us establish a framework that is scalable and adaptable, in tune with business needs and technology advancements. While we consolidated our central architecture with Compaq Alpha servers, we are now in the process of migrating to the HP Itanium platform. We are also progressing with our disaster recovery center at Hosur, which will be commissioned soon.
What will this IT infrastructure achieve?
Our IT plans emanate from business strategy, and our IT initiatives are designed as key enablers in our accelerated journey towards market leadership. The enterprise agility demanded can only be achieved through process refinements, measurement and management of enterprise KPIs, and enhancing the skills of people to perform their roles effectively and easily. This is key to the transformation needed. Being a heavily knowledge-based industry in terms of the engineering strengths required, we give utmost importance to information integrity and confidentiality, and provide technology enablers and physical processes that catalyze this. A careful blend of creativity and confidentiality is the formula for success.

How unique is your ERP system?

Enterprise transaction processes cover our end-to-end business needs, used by over 3,000 users, with an average concurrency of 1,100 and a peak of 1,500. With multiple manufacturing units focusing on different activities, it is a challenge to have the ERP address business and operational needs effectively and efficiently, and this is something we have ensured in the way we have architected and integrated it. We are currently rolling out what we have named ‘Customer Connect’, an initiative that provides an integrated business solution with CRM and Dealer Management processes. The PLM solution based on Matrix One that we are implementing is an effective aid in providing customer value enhancements, and is integrated with the other initiatives towards achieving this.

What is the focus of the disaster recovery back-up project? Can it ensure real-time recovery?
We are in the process of commissioning a disaster recovery center at Hosur. The main objective is to provide business continuity and prevent total disruption of operations. The operational switchover to a mirror infrastructure has been created to recover up to the last committed transaction before a failure of the first data center at Ennore. To ensure the health of the backup center, a planned periodic switchover has been made part of the operational strategy.
What are your innovations to offer a better customer experience?
The centralized Customer Connect system we have put in place is, first, a customer-centric solution that takes into account not just routine transaction processes but the interaction aspects as well. This helps us understand customer needs better. HP is helping us roll out the solution across dealerships, covering vehicles, service and parts operations. The rollout, planned progressively over the next twelve months, will provide our customers with a uniform experience. We are also rolling out, with the help of HP, a Score Card system based on Oracle to address performance measurement and incentivization of our dealerships. The system will also provide us valuable feedback on how we could improve our support to dealerships in effectively addressing customer needs.

Ashok Leyland received BS7799 certification for its Information Security Management System. What is the major impact?
We became the first auto manufacturer in India to receive the world-renowned BS7799 certification for our Information Security Management System (ISMS). The Standardization Testing and Quality Certification (STQC) Directorate, a globally recognized certifying authority of the Government of India under the Ministry of Information Technology, has certified our data center at Ennore (EDC), the system headquarters of the company. Information is an asset which, like other important business assets, adds value to an organization and consequently needs to be suitably protected. In line with Ashok Leyland’s tradition of imbibing best practices, we went in for this certification, which is not really a pre-requisite for a manufacturing company. Apart from identifying and minimizing security threats, the certification brings tremendous credibility.
What are the major challenges?
While there has been an order-of-magnitude improvement in connectivity infrastructure, it is still a major challenge when we approach service providers to link every nook and corner of the country for sustaining mandatory business operations, such as the dealership processes. We are using both MPLS and VSAT technologies, based on business volume, transaction process needs and infrastructure availability from service providers. The other challenge is the mobility of people and the resulting skill impact this has on the dealerships. We are building business process as well as system user training programs for use by dealerships, which would largely mitigate the risks associated with the learning curve.
What are your demands?
Today, technology expertise is easier to get or supplement through outsourcing partnerships. It is the process understanding, and the ability to bring about process refinements through IT enablers, that are important for enterprise success. Software engineering processes and discipline are hygiene requirements in this scenario, and we should be able to take them for granted. People mobility is another ongoing tussle, even for large enterprises, and is best addressed by proper knowledge management techniques, a process orientation in approach and continuous skill upgradation. The future need is “mobile” business management, and this has to come from both service providers and vendors. A single view of operations across the enterprise, 24×7 availability, IT aligned with business dynamics, resource optimization and utilization, an integrated architecture ensuring secure infrastructure, data and applications, and an IT framework that provides standardization, modularity, scalability and consolidation to address new initiatives: these are the business imperatives.
Have you achieved your goals?
We have improved manageability and reduced operating costs by consolidating disparate systems and processes onto a single centralized infrastructure. This has enabled easier rollout of new initiatives, brought about resource optimization and balanced utilization, and increased productivity by automating workflow and triggering actionable system-driven tasks. The main business benefits include increased productivity, process standardization, transparency with suppliers and dealerships resulting in a partnership approach, optimized inventory control, re-deployment of manpower and an infrastructure geared to support inorganic growth. As far as achieving goals is concerned, we always aim to be better than we are, and this is hence a continuing journey.
Baburajan K
Source: Voice&Data

 

 

June 1, 2007

Ensuring Data Integrity in SANs

Filed under: Articles, General News, Market Research, Storage, Uncategorized — Yogesh Hublikar @ 5:43 pm

With all organizational data moving into SANs, their security is becoming a growing concern. Here we look at a few technologies to make them more secure.
   
 
 
 

Monday, June 04, 2007

SANs have numerous benefits in an enterprise setup, as they create an aggregated pool of storage for the organization. But a storage pool that’s accessible to all may become a liability unless well-thought-out security policies are framed and made a part of the storage area network. Traditionally, SANs were deployed for a subset of a single data center, that is, a small isolated network, and were therefore inherently more secure. But today it is commonplace to find a SAN that spans beyond a data center for business continuance and disaster recovery purposes. Moreover, with the advent of technologies such as iSCSI and FCIP, which use the vulnerable TCP/IP stack for transport, the need to secure SANs has become more evident. In this article, we’ll discuss SAN security.

Understanding threats
When planning the security for your SAN, you first need to identify the possible sources of threats. These fall into three parts. One, of course, is external threats, like hackers or people with malicious intent trying to get in. Two, you need to control unauthorized access by internal users and should be able to detect any compromised devices. And last but not least, your SAN should be able to deal with unintentional threats, like misconfigurations and human errors. Unfortunately, the third issue is the most ignored, with minimal or no attention paid to it. Just as in UNIX or Windows, where it’s prudent to minimize the use of root or administrator privileges, in a SAN we should also have strict control over the access privileges granted to users.

Direct Hit!
Applies To: Storage Managers
USP: Secure your SAN
Primary Link: http://www.storagesearch.com
Google Keywords: Data integrity, SAN

In the SAN switches, for instance, remove the operator privileges so that nobody has complete control, and use role-based authentication instead. Likewise, ensure that there are no overlapping domain IDs, which can otherwise result in configuration errors. A correctly configured switch can help prevent both deliberate and unintentional disruptions. Besides securing the SAN fabric, there are many other technologies available for securing the SAN better. Let’s have a look at them.

Zoning
This is a method of creating barriers in the SAN fabric to prevent any-to-any connectivity. In zoning, you create different groups of the servers and storage devices connected to the SAN fabric. Only devices within a particular zone can talk to each other, through managed port-to-port connections. So if a server wants to access data from a storage device located in a different zone, the latter must be configured for multi-zone access.
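Conceptually, zone enforcement boils down to a membership check: two devices may communicate only if some zone contains both of them. The sketch below models that rule in Python with hypothetical WWNs and zone names; real zoning is of course configured on the fabric switches, not in application code.

```python
# Hypothetical zone table: zone name -> set of member device WWNs.
# The storage port "50:06:..." appears in both zones, i.e. it is
# configured for multi-zone access as described above.
ZONES = {
    "zone_finance": {"10:00:00:00:c9:aa:aa:aa", "50:06:01:60:bb:bb:bb:bb"},
    "zone_backup":  {"10:00:00:00:c9:cc:cc:cc", "50:06:01:60:bb:bb:bb:bb"},
}

def can_talk(wwn_a, wwn_b, zones=ZONES):
    """Two devices may communicate only if they share at least one zone."""
    return any(wwn_a in members and wwn_b in members
               for members in zones.values())
```

Here the finance server can reach the shared storage port, but not the backup server, because no single zone contains both hosts.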
SANs provide port-to-port pathways from servers to storage devices and back, through bridges, switches and hubs. Zoning lets you efficiently manage, partition and control these pathways. Additionally, with zoning, heterogeneous devices can be grouped by operating system, with further demarcation based on applications, functions or departments.

Zoning is of two types. Soft zoning, as the name suggests, uses software to enforce zoning. It uses a name server database connected to the FC switch, which stores port numbers and WWNs (World Wide Names) to identify devices during zoning. If a device is put in a different zone, it gets a Registered State Change Notification (RSCN) record in the database. Each device must correctly address the RSCN after a zone change, or all its communications with storage devices in the previous zone will be blocked. You can also have hard zoning, which uses only WWNs to tag each device. Here, the SAN switches regulate data transfers between verified zones, so hard zoning requires that each device pass through the switches’ routing tables. For example, if two ports are not authorized to communicate with each other, their route-table entries are disabled and the communication between those ports is blocked.

While zoning is a good way to control access between devices on a SAN, it cannot mask individual tape or disk LUNs that sit behind a device port. This can be done through LUN masking.

LUN masking
This is a RAID-based feature that binds the WWN of the HBA (Host Bus Adapter) on the host server to a specific SCSI identifier, or LUN. Since zoning can’t mask individual LUNs behind a port, it can’t limit an application server to a specific partition on a RAID; LUN masking overcomes this restriction. Let’s say a single 24 GB RAID is divided into three 8 GB partitions to store data for the Finance, Production and Marketing departments. LUN masking could ‘hide’ the Finance and Marketing partitions, so that an application server can only see the Production department’s partition.
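The effect of a masking table can be sketched as a simple lookup from initiator WWN to the set of LUNs it may see. The WWNs and the LUN numbering below (0 = Finance, 1 = Marketing, 2 = Production) are assumptions chosen to mirror the three-partition example above; a real array enforces this in its firmware.

```python
# Hypothetical masking table: initiator HBA WWN -> set of visible LUNs.
# LUN 0 = Finance, LUN 1 = Marketing, LUN 2 = Production (assumed numbering).
LUN_MASKS = {
    "10:00:00:00:c9:11:11:11": {2},        # production app server
    "10:00:00:00:c9:22:22:22": {0, 1, 2},  # backup server sees all LUNs
}

def visible_luns(hba_wwn, masks=LUN_MASKS):
    """Return the set of LUNs the given initiator is allowed to see.

    An unknown initiator sees nothing, which is the safe default.
    """
    return masks.get(hba_wwn, set())
```

So the production server's HBA sees only LUN 2, exactly the 'hiding' behavior described above.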
The problem with all this is that there’s no requirement for authentication. Although storage vendors are planning to support a wide range of authentication methods, DH-CHAP (Diffie-Hellman Challenge Handshake Authentication Protocol) is used for the Fibre Channel Security Protocol (FC-SP), which addresses FC’s weak security.

LUN masking can be done either at the RAID device level or at the server HBA. Though the former is more secure, it’s not always possible, because not all RAID devices support it. That’s where the second method comes in, through a process known as ‘persistent binding’. This is nothing but letting the operating system assign SCSI target IDs and LUNs through the device drivers of the host HBA. One way this works is that the host assigns a SCSI target ID to the first router it finds, and subsequently assigns LUNs to the SCSI devices attached to it. Operating systems and high-level applications, such as backup software, typically require a static or predictable SCSI target ID for their storage reliability, and persistent binding provides exactly that.

Shoring up the weak points
If you are adding a new switch to the fabric, Access Control Lists (ACLs) are used to allow or deny its addition. Host-to-fabric security technologies use ACLs at the port level of the fabric to allow or deny the HBA of a specific host from attaching to a certain port, so an intruder host cannot simply attach to any port on the fabric and access data without authority. ACLs are also used to filter network traffic, i.e., they can allow or block routed packets at the router interface. PKI can be used for authentication here. PKI and other cryptographic technologies, like MD5, can also be used on some switches for managing the entire fabric; all management and configuration changes are then passed from those switches to all the others on the SAN. This also results in a SAN with a minimal number of security control points.

Finally, configuration integrity is also very important. It ensures that configuration changes in the fabric come from only one location at a time, and are correctly propagated to all switches in the fabric with integrity. The use of a distributed lock manager is one way to ensure that a serial and valid configuration change is applied to the fabric.
Data encryption
What if, despite having all the security measures in place, somebody manages to get into your SAN? If all the data is sitting there in plain text, it’s all available to the attacker. In such a case, it becomes important to consider data encryption. It may not be feasible to encrypt all the data on the SAN, so you need to figure out which data is most sensitive and needs to be encrypted. You might also need to encrypt certain data due to regulatory requirements.
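Encryption proper requires a cipher from a dedicated crypto library or appliance, which is outside a short sketch. As a standard-library illustration of the integrity half of the problem (the article's theme), the snippet below computes and verifies an HMAC-SHA256 tag, which lets you detect whether stored data has been tampered with; the key and payload are, of course, made up for the example.

```python
import hashlib
import hmac

def tag(data: bytes, key: bytes) -> bytes:
    """Compute an HMAC-SHA256 integrity tag for a block of data."""
    return hmac.new(key, data, hashlib.sha256).digest()

def verify(data: bytes, key: bytes, expected: bytes) -> bool:
    """Constant-time check that the data still matches its tag."""
    return hmac.compare_digest(tag(data, key), expected)
```

Storing the tag alongside each block means any modification of the data (or of the tag) fails verification, even though the data itself is not hidden; encryption would be layered on top of this.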
While SAN vendors bolster their security, several companies are betting there’s a market for storage encryption. Many vendors have also introduced security appliances to encrypt data between the application server and the RAID. But these products are new and have little or no track record in the real world, so it is better to wait for reviews.

Virtual SANs
Thanks to developments in this direction, we now have something called VSANs. A virtual SAN (VSAN) is a logical partition of a SAN. It allows traffic to be isolated within specific sections of the network, so it becomes easier to isolate and rectify a problem with minimum disruption. The use of multiple VSANs is said to make a system easier to configure and more scalable: you can add ports and switches at will, and try different combinations of ports, because it is all done logically, giving you more flexibility. VSANs can also be configured separately and independently, making them more secure. They also offer the possibility of data redundancy, reducing the risk of catastrophic data loss.
Final words
It is unwise to expect that the required level of security can be achieved from any one of the above technologies alone. Therefore, in a heterogeneous SAN environment, a combination of the aforementioned technologies, or all of them, should be employed to ensure a storage area network where data integrity is guaranteed. Finally, as the SAN infrastructure evolves and new technologies emerge, every organization must periodically revisit its SAN security strategy.

Manu Priyam

 
© Source: PCQuest  
