

Blogs
Posted by Gigamon Blog on 16/04/2014
Putting the (Whitebox) Cart Before the (SDN) Horse?
By: Shehzad Merchant, Chief Strategy Officer at Gigamon  The network today is more critical to the success of IT than ever before. As such, any disruptive change in networking has to be one that is assimilated into the production environment using a measured and carefully phased approach. We are early in the SDN cycle and […]
Read more »
Posted by Gigamon Blog on 13/03/2014
RSA 2014 Recap: The Year of Pervasive Security and Analytics
by: Neal Allen, Sr. Worldwide Training Engineer, Gigamon According to ESG research and Jon Oltsik, Sr. Principal Analyst at ESG: 44% of organizations believe that their current level of security data collection and analysis could be classified as “big data,” while another 44% believe that their security data collection and analysis will be classified as “big […]
Read more »
Posted by Del Lunn, Principal Consultant at GlassHouse Technologies (UK) on 20/01/2014
 Spreading your data across the hybrid cloud – what data goes where?    

The term hybrid cloud is used to refer to a combination of public and private clouds that are tailored to suit an organisation’s specific business needs.  As a minimum, a single private cloud and single public cloud are all that are required to create a hybrid cloud computing platform. However, organisations can combine multiple private and public clouds to suit the business’s requirements.

 

A public cloud enables organisations to adopt enterprise-class technologies at an affordable price point; however, security, availability, compliance, performance, portability and the cloud provider’s market longevity can often be concerns. A private cloud can answer these concerns but is more expensive to deploy and operate. A hybrid cloud offers the benefits of both by placing an organisation’s data and processing in the appropriate cloud.

 

This raises the question: with regard to the hybrid cloud, what data goes where?

 

Well, it’s really a case of data classification and risk, writes Del Lunn, Principal Consultant at GlassHouse Technologies (UK). When a company’s applications and data are moved from on-premise platforms to a public cloud offering, the organisation is essentially ‘renting’ services alongside other customers while entirely entrusting the provider and its staff with data security, uptime of services, confidentiality, compliance and transition. Any of these can have a catastrophic effect on some businesses if not adhered to or met. Before considering public cloud offerings, companies need to thoroughly understand the business impact and revenue loss that could result from hosting data off-premise in the public cloud. Even if the above are of little relevance, companies looking to move to public cloud offerings must still proceed with caution: what if the cloud provider goes bankrupt? What if the relationship with the provider becomes toxic? What if they decide they no longer want to provide cloud services?

 

At present, the above normally dictates that an organisation’s ‘Crown Jewels’, i.e. enterprise, business critical, secure or regulatory data and applications, remain on-premise or within a secure private cloud environment whilst more commodity based or tactical services such as data archiving, backup, e-mail, collaboration and workspace recovery, are moved to a public cloud.  However, the continued maturation of public cloud service offerings is starting to challenge this principle and more progressive organisations are embracing a ‘cloud first’ approach to application deployment.
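
As a rough illustration of the classification-driven placement described above, here is a minimal sketch in Python. The classification labels, service names and rules are illustrative assumptions, not a GlassHouse methodology:

```python
# Toy data-classification rule for hybrid cloud placement, following the
# principle above: 'crown jewels' stay private, commodity/tactical services
# may go public. All labels and rules here are illustrative assumptions.

PRIVATE_CLASSES = {"business-critical", "regulated", "confidential"}
PUBLIC_FRIENDLY_SERVICES = {"archive", "backup", "email", "collaboration",
                            "workspace-recovery"}

def place_workload(service: str, data_class: str) -> str:
    """Return 'private cloud' or 'public cloud' for a workload."""
    if data_class in PRIVATE_CLASSES:
        return "private cloud"      # crown jewels stay on-premise/private
    if service in PUBLIC_FRIENDLY_SERVICES:
        return "public cloud"       # commodity or tactical services
    return "private cloud"          # default to the cautious option

if __name__ == "__main__":
    print(place_workload("backup", "internal"))        # -> public cloud
    print(place_workload("erp", "business-critical"))  # -> private cloud
```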

 

GlassHouse strongly believes that although public cloud offerings can often be cost effective, they must also be the correct fit for the organisation, striking the right balance between scalability, agility, compliance, performance, security and operational flexibility.

 

Posted by: Del Lunn, Principal Consultant at GlassHouse Technologies (UK)

 

GlassHouse provides vendor-independent data center consulting and managed services, guiding customers through the complexities of cloud, virtualization, storage, security and workspace.  We enable clients to evolve to a services-enabled data center model providing on-demand, elastic services and agility, and enabling IT to focus on innovation. We consider the people, processes, policies and technology in place and create a customized plan that mitigates risk, reduces cost and improves efficiency, driving business value rather than technology outcomes. For more information, visit www.glasshouse.com, visit the GlassHouse blog for expert commentary on key data center issues, and follow us on Twitter @GlassHouse_Tech.

Read more »
Posted by The Hot Aisle on 02/01/2014
Predictions for 2014
I read my predictions for 2013 again today and feel that I did a pretty reasonable job of highlighting the key trends that actually did impact IT in the last 12 months.  I spent some time over the holiday period thinking deeply about what 2014 has in store for us and the new trends that […]
Read more »
Posted by The Hot Aisle on 09/12/2013
IBM win huge Pure Flex Order from European Union
IBM’s strategy of integrating application orchestration into their converged infrastructure offer, PureFlex appears to be starting to pay off handsomely with a significant 6,100 system order from the European Union.  IBM have been focussing for some time on the quality end of the market maintaining a revenue leading position (number one by sales value) with […]
Read more »
Posted by The Hot Aisle on 19/11/2013
Which enterprise technology startups have the most promising technologies?
London, 19th November 2013 - After a fantastic second outing for The Tech Trailblazers Awards, the enterprise information technology startup awards today announced shortlisted finalists across the nine categories: Big Data, Cloud, Emerging Markets, Infosecurity, Mobile, Networking, Storage, Sustainable IT and Virtualization. The second edition of the awards received an increase in both the quantity and quality […]
Read more »
Posted by The Hot Aisle on 11/11/2013
Ron Bianchini’s Avere seamlessly integrates Glacier into the data centre
Want to integrate Amazon Glacier and S3 into your data centre NAS environment? This is the best solution I have seen so far.
Read more »
Posted by The Hot Aisle on 06/11/2013
IBM PureSystems stands up complex converged platform in less than 20 days
I received a very interesting press release from IBM about a new PureSystems installation in South America that is worthy of comment because it emphasises one of the key value propositions of converged platforms, agility. Converged, integrated systems can be configured and delivered extremely quickly and in point of fact the IBM platform is particularly […]
Read more »
Posted by Paul Grimwood, Technical Consultant at GlassHouse Technologies (UK) on 15/10/2013
Am I better off using array-based replication or LAN-based replication for SRM?

VMware Site Recovery Manager (SRM) is a disaster recovery orchestration product which protects virtual machines by duplicating them in a secondary site. This is achieved using either storage array or network based replication, reports Paul Grimwood, Technical Consultant at GlassHouse Technologies.

The decision on which replication technology to use should be determined by business requirements, specifically the Recovery Point Objectives (RPO) defined in the disaster recovery service level agreement. These requirements should be matched to the capabilities and scalability of the replication technology and balanced against costs and other considerations.

Array-based replication (ABR) uses an SRM storage adapter to leverage the replication and snapshot capabilities of the array. This allows for high-performance, synchronous or asynchronous replication of large amounts of data. Where an SLA demands a low RPO – minimal loss of data – ABR remains the preferred option due to its synchronous replication capability. However, this performance comes at a cost: storage infrastructure from the same vendor is required at both sites, and features like array replication and snapshots generally incur additional licensing costs.

Alternatively, SRM can leverage the vSphere Replication (VR) feature, incorporated into vSphere 5.1, which utilises the hypervisor to replicate over the network on a per-VM basis. This approach offers more flexibility as it allows replication between disparate storage and is storage-protocol independent, so low-end, even direct-attached, storage or cloud infrastructure can be used at the failover site to reduce costs. SRM with VR supports features such as failback and re-protect which were previously only available with ABR. Microsoft VSS can additionally be used to quiesce application data during replication passes to ensure data consistency, and multiple point-in-time recovery allows a roll back to a known consistent state.

A disadvantage of using VR is its comparatively lower performance: at best a 15-minute RPO, compared with the synchronous replication possible with ABR. Due to this limitation VR is not suitable in situations requiring minimal data loss, for example with a database tier. Instead, VR is suited to more static systems, such as a web application server tier. There is also a performance impact on the host whilst running replication, as well as limits on the total number of replica VMs that can be supported (500 as opposed to 1000 with ABR). Certain features such as linked clones, physical mode RDMs and Fault Tolerance are not supported, although this may be addressed in the future.

It is possible to combine the two technologies, within supported limits and considering the resulting RPO, should it be desirable to do so. For example, small branch sites could use VR replication to a main site which is then protected using ABR replication. Or, certain VMs could be replicated, using VR, to a cloud provider as well as to an ABR linked site.

To summarise, VR is simpler, more flexible and cheaper than ABR but at the expense of reduced performance, scalability and feature support. The decision on which technology to use, or whether to combine the two, will be determined by business requirements as well as any existing investments.
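
The trade-off above can be reduced to a minimal decision sketch. The Python below uses only the figures quoted in this post (a best-case 15-minute RPO and roughly 500 replica VMs for VR versus 1,000 for ABR); a real design would also weigh cost, existing storage vendors and feature support:

```python
# Minimal decision sketch for choosing SRM replication, based only on the
# figures quoted above: VR offers at best a 15-minute RPO and up to 500
# replica VMs, ABR supports synchronous replication and up to 1,000 VMs.

def choose_replication(rpo_minutes: float, protected_vms: int) -> str:
    if rpo_minutes < 15:          # near-zero data loss needs synchronous ABR
        return "array-based replication (ABR)"
    if protected_vms > 500:       # VR tops out around 500 replica VMs
        return "array-based replication (ABR)"
    return "vSphere Replication (VR)"

if __name__ == "__main__":
    print(choose_replication(rpo_minutes=0, protected_vms=200))   # ABR
    print(choose_replication(rpo_minutes=60, protected_vms=300))  # VR
```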

 

Posted by Paul Grimwood, Technical Consultant at GlassHouse Technologies (UK) LTD

GlassHouse provides vendor-independent data center consulting and managed services, guiding customers through the complexities of cloud, virtualization, storage, security and workspace.  We enable clients to evolve to a services-enabled data center model providing on-demand, elastic services and agility, and enabling IT to focus on innovation. We consider the people, processes, policies and technology in place and create a customized plan that mitigates risk, reduces cost and improves efficiency, driving business value rather than technology outcomes. For more information, visit www.GlassHouse.com, visit the GlassHouse blog for expert commentary on key data center issues, and follow us on Twitter @GlassHouse_Tech.

 

 

Read more »
Posted by Del Lunn, Principal Consultant at GlassHouse Technologies (UK) on 15/10/2013
What’s the best way to take advantage of virtualisation technology for DR?
Can you remember how disaster recovery (DR) used to be achieved for companies before virtualisation technologies were available, asks Del Lunn, Principal Consultant at GlassHouse Technologies (UK).

For a company’s server estate it normally meant deploying an expensive, identical, redundant physical server at a DR site in a ‘hot standby’ 1:1 mapping of server hardware; typically the server needed to be the same brand, model, configuration (CPU and RAM), firmware version, operating system version and patch level. Most of the time the servers and applications in the DR site also required access to physical storage arrays and tape backup/restore facilities. In the event of a disaster, the storage data and server applications would need to be quickly recovered using notoriously unreliable data tapes, disk-based backup, or real-time replication. Of course, in the ‘physical’ world the reality was that DR was rarely invoked, required many complex procedures, took a long time to plan and complete, and carried huge capex and opex costs for manpower, hardware and facilities.

 

So, what’s the best way to take advantage of virtualisation technology for DR?

 

Most organisations have now made use of hypervisors such as VMware vSphere or Microsoft Hyper-V, which enable a single physical server to run multiple virtual servers on top of the hypervisor. Hypervisors can eliminate hardware compatibility problems: VMs run on virtual hardware that is emulated by the hypervisor, so there are no compatibility issues caused by differences in the underlying physical hardware. From a disaster recovery perspective, a hypervisor platform deployed at a primary site can easily be expanded and built upon, with a subset of the virtualisation infrastructure components deployed at a secondary site. When deploying the virtualisation platform at both the primary and secondary sites, organisations can use different physical hardware without having to deal with compatibility issues. This portability means the heavy burden of a 1:1 physical server mapping between the production and DR sites is eradicated. Organisations are then able to replicate virtual machines, using storage replication or built-in virtualisation replication, to the hypervisor platform at the secondary DR site. Replication provides a mirror copy of the virtual machines from the primary site to the secondary DR site, allowing the second site to take over in a DR scenario. The actual failover of virtual machines from the primary to the secondary site in the event of a disaster can be a manual, semi-automated or fully automated process based on vendor and operational toolsets which use scripting and orchestration. Failover testing and recovery can be further enhanced using bolt-on automation products like VMware Site Recovery Manager (SRM), which can build complex recovery plans that test or invoke DR failover and failback at the push of a button.
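
The orchestration step can be pictured as an ordered recovery plan that powers on replicated VMs tier by tier at the DR site. The sketch below is a conceptual illustration only; it does not use the real SRM API, and the VM names and tiers are assumptions:

```python
# Toy illustration of an orchestrated recovery plan: power on replicated VMs
# at the DR site in priority order. Conceptual only; not the VMware SRM API.

RECOVERY_PLAN = [
    ("priority-1", ["domain-controller", "database"]),
    ("priority-2", ["app-server-1", "app-server-2"]),
    ("priority-3", ["web-frontend"]),
]

def run_failover(plan, power_on):
    for tier, vms in plan:
        print(f"Recovering {tier} ...")
        for vm in vms:
            power_on(vm)   # in reality this calls the hypervisor/SRM toolset

if __name__ == "__main__":
    run_failover(RECOVERY_PLAN,
                 power_on=lambda vm: print(f"  powered on {vm}"))
```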

 

In short, the main advantage of using a virtualised platform for disaster recovery is the portability that comes from virtual machines being decoupled from the underlying hardware. This allows a far more flexible DR capability to be realised, reducing costs and ensuring recovery point objectives (RPOs) and recovery time objectives (RTOs) can be met when compared with traditional DR methods. GlassHouse recommends that, if they have not already done so, organisations review their current DR methodologies, procedures and capabilities to understand how they can leverage their investment in virtualisation technologies to provide a cost-effective, flexible DR platform.

 

 

Posted by: Del Lunn, Principal Consultant at GlassHouse Technologies (UK)

 

GlassHouse provides vendor-independent data center consulting and managed services, guiding customers through the complexities of cloud, virtualization, storage, security and workspace.  We enable clients to evolve to a services-enabled data center model providing on-demand, elastic services and agility, and enabling IT to focus on innovation. We consider the people, processes, policies and technology in place and create a customized plan that mitigates risk, reduces cost and improves efficiency, driving business value rather than technology outcomes. For more information, visit www.GlassHouse.com, visit the GlassHouse blog for expert commentary on key data center issues, and follow us on Twitter @GlassHouse_Tech.

 

Read more »
Posted by Andy Wragg, Senior Consultant at GlassHouse Technologies (UK) on 04/10/2013
Are converged network adapters the end of Fibre Channel?

Andy Wragg, Senior Consultant at GlassHouse Technologies, explains that Converged Network Adapters (CNAs) are server hardware adapters which combine the data transfer functionality normally provided by traditional network cards and storage host bus adapters. CNAs utilise the Fibre Channel over Ethernet (FCoE) protocol, allowing storage traffic to be encapsulated in Ethernet frames and carried over the same physical network as your LAN traffic.

There are a number of benefits to this approach:

  • It allows a single method of cabling to be used throughout the data centre, simplifying the number and type of cables in use.
  • It provides a more robust physical medium (copper wires rather than glass fibres).
  • The number of switches is reduced as the two networks (LAN/SAN) converge into a single fabric, hence the nomenclature.

CNAs are typically (but not exclusively) found in large virtual server environments to help reduce infrastructure and management costs.

Despite the benefits above, native FC still has some advantages over FCoE.

Firstly, native FC is typically already in place in the data centre. In order to use CNAs and FCoE, one must replace all the existing (LAN) network hardware with devices that support the Data Centre Bridging (DCB) enhancements. This is a major infrastructure project in its own right, and the costs should not be underestimated.

Secondly, FC is routable, whereas FCoE (which operates at the Ethernet layer, not over IP) is not, and there are no plans to make it so. You must use FCIP to extend or join FCoE SANs, even over short distances.

Thirdly, the maximum distance over which you can run FCoE is 3km, although this can be increased using FCIP. Using long-wave transceivers, you can operate FC over tens of kilometres.

Lastly, only a few storage vendors have systems with native FCoE interfaces. This will change over time, but if you have legacy FC arrays in your environment which only support FC you’ll need some way of attaching them to the converged network.

The move to a converged network within the data centre is compelling. However, given the constraints above, if your second data centre is more than 3km away and you need to replicate synchronously between the two, native FC connectivity is what you’ll need to implement to achieve your aims. Fibre Channel is far from being a dying technology.
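
A simple way to apply the distance constraint discussed above is sketched below; the 3km FCoE limit is the figure quoted in this post, and the function is illustrative rather than a formal design rule:

```python
# Sketch of the distance constraint quoted above: native FCoE is limited to
# roughly 3 km, while long-wave FC transceivers can run tens of kilometres.
# Illustrative only; real designs also consider FCIP for asynchronous links.

FCOE_MAX_KM = 3.0   # figure quoted in the post above

def fabric_choice(inter_site_km: float, synchronous_replication: bool) -> str:
    """Suggest a fabric technology for the inter-site link."""
    if inter_site_km <= FCOE_MAX_KM:
        return "FCoE (converged network) is feasible"
    if synchronous_replication:
        return "native FC with long-wave transceivers"
    return "FCIP over the existing IP WAN"

if __name__ == "__main__":
    print(fabric_choice(2.0, synchronous_replication=True))
    print(fabric_choice(25.0, synchronous_replication=True))
    print(fabric_choice(80.0, synchronous_replication=False))
```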

If you need help in deciding which storage networking technologies are right for you, GlassHouse has the knowledge and experience to help you make those choices effectively.

For more information please visit our website www.GlassHouse.com

Posted by: Andy Wragg, Senior Consultant at GlassHouse Technologies (UK)

Email: Andy.Wragg@GlassHouse.com

Read more »
Posted by Del Lunn, Principal Consultant at GlassHouse Technologies (UK) on 04/10/2013
Can I migrate all my public cloud data back to on-premise storage and if so what are the potential challenges?

With the adoption of cloud computing, and more specifically cloud storage, companies can look to reduce the cost and complexity of their applications and associated data. The economies of scale and shared infrastructure allow service providers to deliver cloud-based storage at very low price points compared to that of traditional on-premise storage platforms. Currently the popular cloud storage use cases tend to be infrequently accessed data scenarios such as archiving, backup, DR, and offsite data protection.

                       

But what happens when a company decides it wants to move its data and applications back to an on-premise storage platform, writes Del Lunn, Principal Consultant at GlassHouse Technologies, and what are the challenges?

 

  • First of all, with the advent of public cloud it is entirely probable that a company will have data hosted with various cloud vendors in a hybrid or multi-cloud model. Hybrid models, however, tend to bring different contractual obligations and interoperability issues, with the need to deal with different SLAs, toolsets, APIs, management infrastructure and so on. Migration back to on-premise storage may therefore be a challenge in terms of dealing with several vendors and leveraging the APIs and tooling of each service provider to enable the data migration.

                                                                              

  • Secondly, the likelihood is that any migration of cloud data back to the on-premise storage platform will take place over the internet. Any data hosted with the cloud service provider may well have grown substantially over the course of the contract (many large enterprises deal with petabytes of data), and this data may have been accumulated from many locations or clients over many different internet sources. Depending on the size of the data being migrated, there may be a real challenge in terms of bandwidth, timescales and the subsequent cost of migrating the data back to the on-premise storage platform (see the worked example after this list).

 

  • Finally, if the above hurdles can be overcome, a major challenge for the company will be where to locate the migrated data and how to ensure the same feature sets and level of service can be attained as in the cloud. Each cloud provider will have developed specialist toolsets and invested in hardware and software components at economies of scale that the end-user company will probably find impossible to match. The likelihood is that there will be a large capex cost associated with moving back to on-premise storage, as investment will be required in the on-premise storage platforms, software and toolsets to match the capacity, feature sets and service that have been provided by the cloud service providers.
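
As a rough illustration of the bandwidth and timescale point in the second bullet, the sketch below estimates how long a bulk repatriation might take over an internet link. The data size, link speed and utilisation figures are illustrative assumptions only:

```python
# Rough transfer-time estimate for repatriating cloud data over the internet.
# The data size, link speed and efficiency figures are illustrative
# assumptions, not figures from any particular provider.

def transfer_days(data_tb: float, link_gbps: float,
                  efficiency: float = 0.7) -> float:
    bits = data_tb * 1e12 * 8                      # terabytes -> bits
    seconds = bits / (link_gbps * 1e9 * efficiency)
    return seconds / 86_400                        # seconds -> days

if __name__ == "__main__":
    # 1 PB (1,000 TB) over a 1 Gbps link at ~70% sustained utilisation
    print(f"{transfer_days(1000, 1.0):.0f} days")  # roughly 130 days
```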

 

In a nutshell, once data has been transitioned to a cloud service provider it may or may not be technically feasible to migrate it back to on-premise storage, depending on the individual scenario. Organisations are certainly going to need a compelling business case, as the cost of the on-premise storage, toolsets and migration services will almost certainly result in a less efficient, less flexible and cost-prohibitive solution when compared with providing data storage in the cloud.

 

GlassHouse recommends that before companies migrate data to public cloud storage offerings they thoroughly understand the ramifications of transitioning data back to on-premise storage or to another cloud provider. A watertight ‘prenuptial agreement’ should be in place outlining both the commercial and technical requirements for transitioning the company’s data back; otherwise it could become very messy, just like a bad divorce.

 

If you would like more information please visit our website www.GlassHouse.com

 

Posted by: Del Lunn, Principal Consultant at GlassHouse Technologies (UK)

Read more »
Posted by Jamie McGinty on 23/09/2013
What is the best enterprise / centralised storage solution for VDI environments?  

This is a question often asked, writes Jamie McGinty, Data Centre Technologies Line of Business Manager at GlassHouse Technologies (UK).

Virtualised desktops, or VDI, have now been around for a number of years and are a game-changing technology, allowing organisations to standardise the desktop environment completely as well as giving IT admins greater control over the desktop estate. However, a major drawback to the adoption of VDI seen in recent years has been the capex cost, particularly around storage. One of the most important elements of any successful VDI deployment is ensuring that the user experience (performance) is as good as, or preferably exceeds, that of the physical desktop, as traditional desktop operating systems expect dedicated local resources. To achieve this, a significant number of IOPS (input/output operations per second, or reads and writes) are required. This can be achieved with traditional storage by creating arrays with many disks to deliver the IOPS required for the number of virtual desktops being provided. The main issues with this approach are scalability and cost: as the demand for more VDI sessions increases, so does the need to increase IOPS to maintain an acceptable level of performance. This means more disks and higher-end storage platforms, which very quickly start to become cost prohibitive.
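
To see why spindle-based designs become cost prohibitive, here is a back-of-envelope sketch. The per-desktop and per-disk IOPS figures are common rules of thumb used as assumptions here, not vendor data:

```python
# Back-of-envelope spindle count for a traditional-disk VDI datastore.
# The per-desktop and per-disk IOPS figures are assumptions (rules of thumb),
# and the calculation ignores RAID write penalties and boot storms.

import math

def disks_required(desktops: int, iops_per_desktop: float = 25,
                   iops_per_disk: float = 180) -> int:
    """Disks needed to satisfy steady-state IOPS for a desktop pool."""
    return math.ceil(desktops * iops_per_desktop / iops_per_disk)

if __name__ == "__main__":
    for n in (500, 1000, 2000):
        print(n, "desktops ->", disks_required(n), "x 15k SAS disks")
```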

 

This brings us to the next option which is Solid State Disk (SSD). Although SSD storage can provide the required IOPS with fewer disks (Solid State Storage Units), it is currently still an expensive option. Organisations can feel that a VDI initiative based on SSD storage is too expensive to be viable when compared to the cost of simply replacing the desktop estate with new hardware.

 

So what is the best option? Put simply – RAM! Yes, good old-fashioned memory! The last couple of years have seen software companies address this problem by creating software solutions that use server RAM as storage for running virtual desktop sessions. Given that one of the biggest contributors to the desired user experience is IOPS, server RAM can provide this in abundance. The software often comes in the form of a virtual appliance which not only allows RAM to be presented as datastores on the virtualised platform but also manages the RAM allocation on the virtual host servers dedicated to providing virtual desktop sessions. This approach delivers the required user experience at a cost per desktop that can be lower than a physical equivalent, and is also very easy to scale by simply adding further host servers to the VDI pool.

 

To find out how GlassHouse Technologies’ storage expertise can improve your company’s VDI performance and user experience, contact GlassHouse.com

 

Posted by Jamie McGinty, Data Centre Technologies Line of Business Manager at GlassHouse Technologies (UK) LTD

Email: Jamie.McGinty@GlassHouse.com

Read more »
Posted by Hywel Matthews on 18/09/2013
How do I know if I am DR Ready?

The fear that business services – or indeed the business itself – might not be recoverable after a disaster-level event results in many sleepless nights for CIOs across the world.

But it doesn’t need to be that way, reports Hywel Matthews, Principal Consultant at GlassHouse Technologies (UK).

 

Disaster recovery (DR), a subset of ‘Business Continuity’, is the process, policies and procedures required for the recovery or continuation of technology infrastructure after a disaster level event. Disasters come in multiple forms and may be highly unpredictable in nature but the impact they have on your business can be calculated and mitigated against through robust preparation and testing.

 

So how do you avoid those sleepless nights? You prepare for the worst-case scenario: imagine losing everything in and around your primary data centre facility.

There are four key questions that need to be answered to have confidence in your DR solution:

 

  1. Can my DR infrastructure be affected by the same disaster that destroyed my primary data centre?
  2. Has my DR solution been designed and built to ensure adherence to application recovery objectives (RPO & RTO)?
  3. Are my DR processes documented, accessible and understood by everyone that might have to follow them?
  4. Can users connect to applications that are running from DR? (remember the primary DC and associated networks may not be there any more)

 

 

While an audit that answers the above four questions will help, the only way to be sure that an organisation is DR ready is by testing, testing and more testing. DR tests are scary and expensive, but are invaluable exercises which assess both the DR solution and the people that execute it.  Regular DR tests alleviate the stresses associated with the unknown and regular practice will reduce time to recovery in disaster scenarios.

 

 

Visit us online and review your existing DR approach with the GlassHouse Best Practise Guidelines at www.GlassHouse.com.

 

Posted by Hywel Matthews, Principal Consultant at GlassHouse Technologies (UK) LTD

Email: Hywel.Matthews@GlassHouse.com

Read more »
Posted by Surjit Randhawa, Senior Consultant at GlassHouse Technologies (UK) on 18/09/2013
Why do many organisations consider storage in a VMware infrastructure to be a bottleneck and how can this be addressed?

The first consideration should be ‘is the storage actually the bottleneck?’, writes Surjit Randhawa, Senior Consultant at GlassHouse Technologies (UK).

 

Many VMware performance issues are blamed on storage when the real problem lies elsewhere. For the sake of this question, however, we can assume that the bottleneck is the storage; there are three main causes of the problem:

 

  • Storage not fit for purpose / poorly performing
  • Misconfiguration of the storage
  • Overloading the storage with too many devices

 

All three of the issues above can be attributed to either unrealistic expectations of what can be achieved or a misunderstanding of how the VMware environment uses the storage. VMware Profile Driven Storage can help alleviate these bottlenecks by matching Virtual Machine (VM) performance requirements with the appropriate datastore.

 

Organisations should think of the IO profiles of their VM workloads and what SLAs are required for them, i.e. which storage type should VMs requiring high I/O reside on, how to differentiate between different workloads/applications, etc.

 

By configuring Storage Profiles, administrators know exactly which profile to select when creating/migrating a virtual machine. Administrators can therefore check for profile compliance and realign any workloads that do not match the required policy. By constructing datastores on different types of underlying storage, a virtual machine’s VMDK (disk file) can reside on the most appropriate storage. Furthermore, different VMDKs for a virtual machine can reside on different datastores to provide the best level of performance and scalability for individual volumes.

 

An example of Storage Profiles is shown in the table below where workloads can be characterised by their demand on I/O. 

 

 

Profile    Data Store             Disk Type      Description
Gold       Gold_01, 02, 03…       SSD / FC       Workloads that require high performance
Silver     Silver_01, 02, 03…     SAS / iSCSI    The most common workload, used for system and data drives
Bronze     Bronze_01, 02, 03…     SATA           Non-critical/reference servers hosting archive data with minimal access
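
A conceptual mapping of a VM to one of the profiles in the table might look like the sketch below. It is illustrative only: it does not call the vSphere Profile Driven Storage API, and the IOPS thresholds are assumptions:

```python
# Conceptual mapping of a VM workload to one of the storage profiles in the
# table above. Illustrative only; does not use the vSphere Profile Driven
# Storage API, and the IOPS thresholds are assumed values.

def select_profile(required_iops: float, business_critical: bool) -> str:
    if business_critical or required_iops > 5000:
        return "Gold"     # SSD / FC datastores
    if required_iops > 500:
        return "Silver"   # SAS / iSCSI datastores
    return "Bronze"       # SATA datastores for archive/reference data

if __name__ == "__main__":
    print(select_profile(8000, business_critical=True))    # Gold
    print(select_profile(1200, business_critical=False))   # Silver
    print(select_profile(50,   business_critical=False))   # Bronze
```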

 

Visit us online and review your existing VMware approach with the GlassHouse Best Practise Guidelines at www.GlassHouse.com.

 

 

Posted by Surjit Randhawa, Senior Consultant at GlassHouse Technologies (UK) LTD

Email: Surjit.Randhawa@GlassHouse.com

Read more »
Posted by David Boyle, Senior Consultant at GlassHouse Technologies (UK) LTD on 18/09/2013
When should I consider using SSD instead of SAS or SATA?  

The criteria for deploying Solid State Drives (SSDs) are no different from those for choosing any other disk technology, reports David Boyle, Senior Consultant at GlassHouse Technologies. The choice should be based on matching the application requirements against three primary factors: cost, performance and capacity.

  • The cost of SSDs is much higher than that of SAS (approx. 10x), which is in turn greater than SATA.
  • The sustainable IO performance of SSDs can be 40 to 60 times greater than that of SAS and SATA respectively.
  • The standard capacity of SSDs has increased to 600GB, but remains much lower in comparison to SAS (900GB) and SATA (3TB).

SSDs are therefore of interest when an application’s performance requirements outweigh those of cost and capacity, being especially well suited to applications (often the most business critical) with the highest random read and write requirements. In some cases organisations have dramatically reduced data centre footprint and storage costs, not to mention power and cooling needs, by replacing hundreds of SAS drives with a much smaller number of strategically placed SSDs.

SSDs should therefore be an important consideration for any organisation’s tiered storage strategy, although in order to extract the greatest value from them it is important they are deployed in conjunction with the vendor’s thin-provisioning and automated-tiering products. Thin provisioning can greatly improve the utilisation of the SSD capacity and avoid costly ‘white space’, as only actual data is written to the drives. Automated tiering promotes data that requires the greatest performance to SSD, and relegates less demanding data to SAS or SATA. This use of different drive technologies together with thin provisioning and automated tiering maximises the effective use and capacity of the SSDs without the need for operator intervention.
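
A back-of-envelope comparison using the ratios quoted above (roughly 10x the cost and 40-60x the IOPS of SAS) illustrates the point; the absolute per-drive IOPS and price figures below are assumptions:

```python
# Back-of-envelope comparison using the ratios quoted above: an SSD costs
# roughly 10x a SAS drive but delivers roughly 40-60x the sustainable IOPS.
# Absolute figures (SAS IOPS and price) are illustrative assumptions.

import math

SAS_IOPS, SAS_PRICE = 180, 300            # assumed per-drive figures
SSD_IOPS, SSD_PRICE = 180 * 50, 300 * 10  # applying the ~50x / ~10x ratios

def drives_and_cost(target_iops: float, drive_iops: float, drive_price: float):
    n = math.ceil(target_iops / drive_iops)
    return n, n * drive_price

if __name__ == "__main__":
    for label, iops, price in (("SAS", SAS_IOPS, SAS_PRICE),
                               ("SSD", SSD_IOPS, SSD_PRICE)):
        n, cost = drives_and_cost(50_000, iops, price)
        print(f"{label}: {n} drives, ~${cost:,}")
```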

Visit us online and review your existing storage approach with the GlassHouse Best Practise Guidelines at www.GlassHouse.com.

 

Posted by David Boyle, Senior Consultant at GlassHouse Technologies (UK) LTD

Email: David.Boyle@GlassHouse.com

Read more »
Posted by Shehzad Merchant, Chief Strategy Officer at Gigamon on 12/09/2013
Enabling Multi-tenancy within Enterprise IT Operations

Multi-tenancy is a well understood term in cloud and carrier environments where multiple customers serve as tenants over a common infrastructure. However, the notion of multi-tenancy, the associated SLAs for each tenant, and the ability to virtualize the underlying infrastructure to isolate individual tenants, is quickly making its way into enterprise IT operations. Today, enterprise IT organizations have multiple departments such as security, networking, applications, among others. Each department is increasingly being held to stringent requirements for ensuring network and application availability, responsiveness, and a good user experience. This is leading to an increasing reliance on various classes of tools that provide the ability to monitor and manage the applications, network, security, as well as user experience. Many of these tools leverage Gigamon’s Visibility Fabric™ for optimal delivery of traffic from across physical and virtual networks to these tools. As departments are increasingly held to their own SLAs and KPIs, they need to be able to autonomously carve out traffic delivery to the departmental tools, as well as independently configure, manage, and adapt traffic flows to the departmental tools without impacting other departmental traffic flows. And they need to be able to do all of this over a common underlying Visibility Fabric, which leads to a model where the Visibility Fabric needs to support a true multi-tenant environment.

With the GigaVUE H Series 3.1 software release, Gigamon introduces several enhancements to the Visibility Fabric that enable multi-tenancy and enable IT departments to optimize their workflows, reduce workflow provisioning times and provide for both privacy as well as collaboration among departments when it comes to their monitoring infrastructure.

There are three key aspects to these new capabilities.

  1. Enabling departments to carve out their own slice of the Visibility Fabric using an intuitive Graphical User Interface (GUI) that supports the workflow required for multi-tenancy. Empowering multiple tenants to apportion the Visibility Fabric each with their own access rights, sharing privileges and their traffic flows, through a drag and drop GUI-based model is a key step towards simplifying the provisioning model in a multi-tenant environment. Moving away from a CLI based approach to a GUI based approach is a key step towards improving workflows across departmental silos.
  2. Advancing Gigamon’s patented Flow Mapping® technology within the Visibility Fabric Nodes to support multi-tenancy whereby each tenant can carve out their own Flow Maps, ports, and actions, without impacting the traffic flows associated with other tenants. This is a significant architectural advancement that builds on Gigamon’s existing Flow Mapping technology to provision resources within the underlying visibility nodes based on the department’s (or tenant’s) requirements.
  3. Providing role based access control (RBAC) so that departmental users can work both collaboratively as well as privately over the common underlying Visibility Fabric.

These capabilities represent a significant advancement in how IT operations can take advantage of the Visibility Fabric to rapidly deploy new tools, enable real time or near real time tuning of the Visibility Fabric and better meet their individual SLAs and KPIs. Taken together, these key capabilities empower IT organizations to provide Visibility as a Service to their various departments.
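
Conceptually, a tenant’s slice of the Visibility Fabric bundles its own ports, Flow Maps and user roles, and changes are rejected unless the user holds the right role in that slice. The sketch below illustrates that idea only; it is not the GigaVUE H Series configuration model or API, and all names are assumptions:

```python
# Conceptual model of per-tenant slices of a visibility fabric: each tenant
# owns its flow maps, ports and user roles, and cannot touch other tenants'
# resources. Illustration of the multi-tenancy idea only; not a Gigamon API.

from dataclasses import dataclass, field

@dataclass
class TenantSlice:
    name: str
    ports: set = field(default_factory=set)        # tool ports owned by tenant
    flow_maps: dict = field(default_factory=dict)  # traffic filter -> port
    users: dict = field(default_factory=dict)      # user -> role (admin/viewer)

    def add_flow_map(self, user: str, traffic_filter: str, dest_port: str):
        if self.users.get(user) != "admin":
            raise PermissionError(f"{user} may not modify {self.name}")
        if dest_port not in self.ports:
            raise ValueError("destination port not owned by this tenant")
        self.flow_maps[traffic_filter] = dest_port

if __name__ == "__main__":
    security = TenantSlice("security", ports={"tool-1"},
                           users={"alice": "admin", "bob": "viewer"})
    security.add_flow_map("alice", "tcp and port 443", "tool-1")
    print(security.flow_maps)
```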

For more information, please see the Visibility as a Service Solutions Overview.

Read more »
Posted by Gigamon Blog on 11/09/2013
Enabling Multi-tenancy within Enterprise IT Operations
by: Shehzad Merchant, Chief Strategy Officer at Gigamon Multi-tenancy is a well understood term in cloud and carrier environments where multiple customers serve as tenants over a common infrastructure. However, the notion of multi-tenancy, the associated SLAs for each tenant, and the ability to virtualize the underlying infrastructure to isolate individual tenants, is quickly making […]
Read more »
Posted by The Hot Aisle on 19/08/2013
GreenBytes Named to CRN Magazine’s 2013 Top Emerging Virtualization Vendors List
Providence, R.I. – August 19, 2013 – GreenBytes®, Inc., a developer of full-featured virtual desktop optimization solutions that uniquely support existing infrastructure, today announced that it has been named a 2013 Emerging Virtualization Vendor by UBM Tech Channel’s CRN Magazine.  The annual CRN list highlights hot tech startups making an impact on the channel and […]
Read more »
Posted by The Hot Aisle on 17/08/2013
The world will never be the same again
Sometimes it is a single gradual and insidious technology that creeps up on us, more often it is the intersection of multiple innovations that (unnoticed) creates a tipping point and then everything is changed. The data center will never be the same again. Here are the colliding technologies that come together and change everything forever: […]
Read more »
Posted by Gigamon Blog on 24/07/2013
Is OpenFlow Going Down the Path of Fiber Channel?
by: Shehzad Merchant, Chief Strategy Officer at Gigamon The promise of OpenFlow is open, standardized networking. However, recent trends suggest that OpenFlow deployments are straying away from that promise and moving towards end-to-end lock-in, much like the days of fiber channel. Today, if you take an OpenFlow-enabled switch from one vendor, an OpenFlow controller from […]
Read more »
Posted by Gigamon Blog on 12/07/2013
Visibility in Motion at Cisco Live Orlando
by: Huy Nguyen, Senior Director of Product Management at Gigamon Now that Cisco Live Orlando has come and gone and we’re gearing up for VMworld, we’re seeing even more attention being paid to virtualization given the interest in software defined networks (SDN) and data centers (SDDC). So, virtualization remains hot and with around 60 percent […]
Read more »
Posted by Trevor Dearing, EMEA Marketing Director, Gigamon on 11/04/2013
Visibility in the Virtual World
Everything starts with an idea; every new business process, new service or new product begins with someone saying, “I have an idea”. Ideas are great, but if they cannot be practically, quickly and economically implemented, they stay just ideas.

Let’s look at the idea that moving to a more virtualized environment in the data centre can speed up the roll out of new applications, which, as a concept, no one really disagrees with.


The only reason that this idea works is that new developments in server, network and storage technology have made this concept a reality. Every year, in step with Moore’s law, the performance of servers increases, and they will soon support many more virtual machines than before. Solid-state disk drives give applications much faster access to dynamic tiered storage via Ethernet, and tying the whole thing together are new 10Gb and, soon, 40Gb and 100Gb networks.


Management always proves a challenge because there are so many things that need to be watched. Are the servers performing as they should? Is the network causing any problems? These are all questions that need answering. This traffic needs to be delivered to the monitoring tools in its most efficient form; we cannot afford to drop any of the key information. This means that we cannot pass this traffic through the existing data network, as the network itself may be the cause of the problem.


This challenge was recognized some years ago and a new technology was created to solve it. The idea was pretty simple: build a parallel network specifically to manage all of the monitoring traffic and deliver it to the tools. The complexity comes when you start to realize that traffic taken from multiple points in the network could contain duplicates or, conversely, could be a mix of different types of traffic, not all of which is of interest to the various tools.


This means that this monitoring network needs to de-duplicate the traffic and sort it so that the right traffic goes to the right place. Thus the visibility fabric was born: a distributed network that can deliver the right traffic to the right tool at the right time in the right format. This means that visualizing the traffic on the network and within the system becomes much cheaper, can be centralized and becomes much more efficient.
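
The two jobs described above, de-duplicating traffic captured at multiple points and steering the right traffic to the right tool, can be illustrated with a toy pipeline. This is purely conceptual and not Gigamon’s implementation:

```python
# Toy illustration of the two jobs described above: drop duplicate packets
# captured at multiple tap points, then steer each packet to the tool that
# wants that kind of traffic. Purely conceptual; not Gigamon's implementation.

import hashlib

def dedupe_and_steer(packets, tool_filters):
    """packets: iterable of (payload_bytes, proto); tool_filters: proto -> tool."""
    seen = set()
    for payload, proto in packets:
        digest = hashlib.sha1(payload).hexdigest()
        if digest in seen:
            continue                      # duplicate copy from another tap
        seen.add(digest)
        tool = tool_filters.get(proto)
        if tool:
            yield tool, payload           # right traffic to the right tool

if __name__ == "__main__":
    pkts = [(b"GET /", "http"), (b"GET /", "http"), (b"hello", "voip")]
    tools = {"http": "app-monitor", "voip": "voice-analyser"}
    for tool, p in dedupe_and_steer(pkts, tools):
        print(tool, p)
```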

Read more »
Posted by Junipers Blog on 10/04/2013
Belgacom Selects Juniper Networks 100GE Routing Solution to Scale Its Cloud-Based and Multimedia Services

BRUSSELS, BELGIUM--(Marketwired - Apr 10, 2013) -   Juniper Networks (NYSE: JNPR), the industry leader in network innovation, today announced that Belgacom, the largest telecommunications provider in Belgium, has deployed Juniper Networks® T Series Core Routers to ready itself for the future of Internet applications and bandwidth growth, demanded by its customers. Juniper Networks T4000 allows Belgacom to embrace the rapid adoption of new cloud applications, the explosion in online video traffic, the introduction of IPv6 and the widespread use of smart-phones and tablets and prepare the core of its network for the next decade.

News Highlights

The highly resilient network offers fast Internet access to residential subscribers, business customers, mobile, data centers and content platforms and will address the anticipated exponentially growing bandwidth demand for the next few years. 

Juniper provides the unique capability of seamlessly upgrading an existing T640 or T1600 router to a T4000 router and beyond while using the same operating system, Junos®, to accommodate the changing requirements of service providers without compromising OPEX.

T Series routers enable service providers to deliver to their customers stringent Quality of Service and meet Service Level Agreements for multiservice transit and IP services.

Supporting Quotes

Belgacom

With the introduction of the Juniper Networks T4000 Series in the Belgacom peering infrastructure, we aim at the realization of a highly available, fully redundant infrastructure. The realized architecture consists of two fully geo-redundant peering infrastructures being able to carry the total traffic generated by Belgacom customers. This infrastructure will allow Belgacom to drastically improve service availability and offer full resiliency to Internet applications, which in the last few years have become very critical for our customers.

- Johan Luystermans, vice president, network engineering and member of management committee, Belgacom

Juniper Networks

With the rapid subscription and data growth in cloud-based services, social networks and video, service providers require solutions that can provide superior flexibility and scale with the network intelligence needed to effortlessly adapt to new requirements. Juniper's T Series platform will support Belgacom's advanced networking goals to provide its customers with new high-demand services while increasing Belgacom's average revenue per subscriber and reducing its total cost of ownership.

- Mike Marcellin, senior vice president, strategy and marketing, Platform and Systems Division, Juniper Networks

Additional Resources:

Juniper.Net Community: www.juniper.net/community

About Juniper Networks

Juniper Networks is in the business of network innovation. From devices to data centers, from consumers to cloud providers, Juniper Networks delivers the software, silicon and systems that transform the experience and economics of networking. Additional information can be found at Juniper Networks (www.juniper.net) or connect with Juniper on Twitter and Facebook.

Juniper Networks and Junos are registered trademarks of Juniper Networks, Inc. in the United States and other countries. The Juniper Networks and Junos logos are trademarks of Juniper Networks, Inc. All other trademarks, service marks, registered trademarks, or registered service marks are the property of their respective owners.

Read more »
Posted by Junipers Blog on 04/04/2013
Juniper Networks Unveils the World's Most Programmable Core Switch to Support Emerging Applications and Growing Workloads

SUNNYVALE, CA--(Marketwired - Apr 4, 2013) - Juniper Networks (NYSE: JNPR), the industry leader in network innovation, today unveiled three new products designed to boost business agility and simplify network management. A massive spike in BYOD, mobile users and new enterprise application deployments has created network congestion and complexity. Compounded with the challenge of contracting IT budgets, these trends have severely restricted IT's ability to quickly respond and maintain network reliability.

Many enterprises use single purpose legacy infrastructures, built in silos and defined by their location in the enterprise, that can't keep pace with rapid changes in compute, storage and application requirements and often inhibit service delivery. In order to maximize delivery and availability of next-generation cloud services, server virtualization and rich media applications, Juniper introduces a new agile, programmable network. This will enable network operators to respond to business changes and monitor and react to how the network meets application service level agreement (SLA) requirements in a matter of seconds, not days or weeks.

Juniper Networks has addressed these challenges with a series of new products for enterprise campus and data center infrastructures. The new Juniper Networks® EX9200 Programmable Switch enables accelerated response to changing business needs, while its built-in ability to support a virtual WLAN controller, the Juniper Networks JunosV Wireless LAN Controller, will deliver high levels of reliability and flexibility across the enterprise to lower capital expense. The Junos® Space Network Director provides a single-pane-of-glass network management solution for wired and wireless LANs and data centers, consolidating multiple management tools to simplify network operations and deliver a comprehensive advanced platform that prepares enterprises for tomorrow's applications, services and workload demands.

News Highlights

Industry Leading Programmability - To prepare enterprise campuses and data centers for new technologies and business requirements, the EX9200 provides the industry's highest level of programmability required for emerging applications and environments. Built upon the Juniper One Programmable ASIC, the EX9200 prepares enterprises for emerging Software-Defined Networking (SDN) protocols, allowing for network automation and interoperability without the need for additional hardware. This eliminates cost, complexity and time delays from line card upgrades or new infrastructure deployments, empowering enterprises to be more nimble and efficient. In enterprise campus and data center deployments, the EX9200 Virtual Chassis technology simplifies the network architecture, reducing network devices and layers by up to fifty percent. The EX9200 will ship with 1/10/40GbE interfaces and it is scheduled to deliver 100GbE performance this year. As with other next-generation offerings from Juniper Networks, the company will provide a Getting Started program to enable enterprises to evolve from their current infrastructure to the EX9200 platform.

Wireless LAN Controller Delivered as a Service for the Enterprise - Addressing the need to support the growing trend of BYOD and seamless integration between wired and wireless networks, Juniper Networks is the only vendor to deliver a flexible virtual WLAN controller designed to run on any combination of physical appliances, on a virtual machine (VM), or directly on Juniper Networks switches (future). Juniper has made wireless controller functionality a service on the network while offering consistent, industry-leading capabilities such as controller clustering, in-service software upgrades, self-organizing adds, moves and changes, and local switching across the portfolio. At the same time, none of these implementations is tied to a specific location within the enterprise. This simplifies network deployments, increases flexibility, reduces costs with a pay-as-you-grow model and delivers high reliability for wired and wireless networks. Enterprises don't need to have multiple islands of networks within their infrastructure and can instead have a common, unified end-to-end network for user access -- whether it's wired or wireless.

Managing the Network from a Single Pane - Increasing network complexity makes network behavior unpredictable and difficult to forecast the effects of network changes. Junos Space Network Director, a single campus-to-data center management tool, provides a holistic view into the enterprise network. By administering the wired and wireless infrastructure, users, applications, and services from a central location, Junos Space Network Director accelerates application deployment time and reduces complexity and operational expenditures. What has previously been delivered as three to four different management tools can now be consolidated into a single application.

Supporting Quotes

Three, Ireland
Data centers are evolving into highly virtualized environments, and switches must be flexible enough to fill multiple roles. Juniper's EX9200 is a programmable, resilient and scalable platform that delivers a rich set of features and interface options that will allow Three Ireland to accelerate and enhance our service delivery while greatly improving the customer experience.
- David Hennessy, chief technical officer, Three, Ireland

Pacific Northwest GigaPOP
Juniper's EX9200 dramatically simplifies how we provide cost-effective, reliable, high-bandwidth and high-capacity networking to our research and education participants, as well as faculty, students and staff. The advanced programmability of the EX9200 delivers future-ready capabilities that can be easily adapted to support new requirements created by emerging applications, such as SDN and EVPN, while automation features allow us to transform network operations, reducing complexity and overhead.
- Schyler Batey, lead network engineer, Pacific Northwest GigaPOP

InfoGuard AG
Reducing complexity and cost are essential for the success of our business. The Junos Space Network Director provides our IT staff an end-to-end unified management application to administer our network infrastructure, users and applications from a single interface. This simplification increases the security and reliability of our infrastructure and removes the complexity of operating multiple management tools, which reduces operation expenditures while improving our ability to quickly react to business needs.
- Thomas Meier, chief executive officer, InfoGuard AG

Channel Partner: Copper River
Our customers need agility to manage virtualized and increasingly unpredictable bandwidth workloads, while being able to scale for future requirements. The Juniper Networks EX9200 programmable switch is uniquely positioned for this upcoming Layer 2 services refresh. It provides our customers a programmable core chassis with highly dense 10GbE and 40GbE configurations, all managed by Junos. Moreover, the EX9200 is future-ready with its ability to support 100GbE, SDN capabilities, and multiple automation options, providing a highly customizable, agile, infrastructure to customers.
- Joe Kim, vice president of engineering, Copper River

Channel Partner: Torrey Point
The EX9200 comes at a perfect time for our customers, some of whom are in the process of data center upgrades. As our customers look at technology upgrades and new architecture designs, they need a feature-rich, high-capacity switch to consolidate the older devices they had in three-tiered networks. With Juniper's silicon investment in the EX9200, this is going to be the right switch for them.
- Doug Marschke, chief technology officer, Torrey Point

Enterprise Strategy Group
A dynamic and competitive global marketplace requires organizations to be flexible and responsive. As a result, the underlying IT infrastructure, and especially the network, needs to be able to evolve with the business. The Juniper EX9200 Ethernet switching platform delivers a level of programmability that will allow enterprises to prepare for emerging protocols and applications. This programmability will also ensure that network operators have the flexibility to add those future services with limited need for hardware upgrades, thus providing a high degree of assurance and investment protection.
- Bob Laliberte, senior analyst, Enterprise Strategy Group

Juniper Networks
It is critical for enterprises to design their infrastructure to adapt to emerging business requirements with programmable and flexible platforms. Juniper's new EX9200 Programmable Switch, the virtualized JunosV WLAN Controller and Junos Space Network Director future-ready enterprises for adaptability and, ultimately, cost optimization -- a solution customers can buy today and that will grow with their needs over time.
- Jonathan Davidson, senior vice president, Campus and Data Center Business Unit, Juniper Networks

Additional Resources:

Blog by Jonathan Davidson: Overcoming Complexity with Network Agility
Learn about new Juniper Networks products for the agile enterprise
Product Page Info: Juniper Networks EX9200 Programmable Switch
Product Page Info: JunosV Wireless LAN Controller
Product Page Info: Junos Space Network Director

About Juniper Networks
Juniper Networks is in the business of network innovation. From devices to data centers, from consumers to cloud providers, Juniper Networks delivers the software, silicon and systems that transform the experience and economics of networking. Additional information can be found at Juniper Networks (www.juniper.net) or connect with Juniper on Twitter and Facebook.

Juniper Networks and Junos are registered trademarks of Juniper Networks, Inc. in the United States and other countries. The Juniper Networks and Junos logos are trademarks of Juniper Networks, Inc. All other trademarks, service marks, registered trademarks, or registered service marks are the property of their respective owners.

Statements in this press release concerning Juniper Networks' prospects, future products and prospective benefits to customers are forward-looking statements that involve a number of uncertainties and risks. Actual results or events could differ materially from those anticipated in those forward-looking statements as a result of certain factors, including delays in scheduled product availability, the company's failure to accurately predict emerging technological trends, and other factors listed in Juniper Networks' most recent report on Form 10-K filed with the Securities and Exchange Commission. All statements made in this press release are made only as of the date of this press release. Juniper Networks undertakes no obligation to update the information in this release in the event facts or circumstances subsequently change after the date of this press release.


To read more please go to Click Here
Read more »
Posted by Juniper's Blog on 04/04/2013
Overcoming Complexity with Network Agility

The massive growth in the number of mobile users, devices and apps is overloading enterprise networks. Already strained by limited resources and shrinking IT budgets, enterprises are exploring ways the network can better support business agility, quickly deploy new applications/services, and support more users.


To read more please go to Click Here
Read more »
Posted by Juniper's Blog on 02/04/2013
Workflow Automation with Puppet for Junos OS

Juniper Networks is announcing the general availability of Puppet for Junos OS, a tool that will allow IT teams to seamlessly manage both the compute and networking infrastructures in concert. Sysadmins can now deploy applications and make corresponding changes to the network with a common tool, resulting in a much more agile IT infrastructure.
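For readers who want a concrete feel for what driving the network programmatically looks like, the minimal Python sketch below stages and commits a VLAN definition on a Junos device over NETCONF using the open-source ncclient library. To be clear, this illustrates the general idea rather than the Puppet for Junos OS module itself; the hostname, credentials and VLAN values are placeholders, and the XML snippet assumes an EX-style configuration hierarchy.

# Minimal sketch of programmatic Junos configuration over NETCONF (ncclient).
# Not the Puppet for Junos OS module; hostname, credentials and VLAN values
# are placeholders, and the XML assumes an EX-style configuration hierarchy.
from ncclient import manager

VLAN_CONFIG = """
<config>
  <configuration>
    <vlans>
      <vlan>
        <name>blue</name>
        <vlan-id>100</vlan-id>
      </vlan>
    </vlans>
  </configuration>
</config>
"""

with manager.connect(host="switch1.example.net", port=830,
                     username="admin", password="secret",
                     hostkey_verify=False,
                     device_params={"name": "junos"}) as conn:
    conn.edit_config(target="candidate", config=VLAN_CONFIG)  # stage the change
    conn.commit()                                             # activate it

Puppet layers a declarative, agent-driven workflow on top of this kind of programmability, so the same change can be described once and converged across servers and switches together.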


To read more please go to Click Here
Read more »
Posted by Juniper's Blog on 01/04/2013
Juniper Networks Announces Date of Its First Quarter 2013 Preliminary Financial Results Conference Call and Webcast

SUNNYVALE, CA--(Marketwire - Apr 1, 2013) - Juniper Networks (NYSE: JNPR), the industry leader in network innovation, today confirmed it will release preliminary financial results for the quarter ended Mar. 31, 2013, on Apr. 23, 2013 after the close of the market and will host a conference call at 2:00 p.m. PDT (Pacific Daylight Time), to be broadcast live over the Internet.

To participate via telephone, the toll-free dial-in number is 877-407-8033; international callers dial +1-201-689-8033. Please dial in ten minutes prior to the scheduled conference call time. The webcast will be available at http://investor.juniper.net.

Juniper Networks' tentative future release dates for 2013:

Q2 2013 preliminary financial results tentative date: July 23, 2013
Q3 2013 preliminary financial results tentative date: Oct. 22, 2013

About Juniper Networks
Juniper Networks is in the business of network innovation. From devices to data centers, from consumers to cloud providers, Juniper Networks delivers the software, silicon and systems that transform the experience and economics of networking. Additional information can be found at Juniper Networks (www.juniper.net) or connect with Juniper on Twitter and Facebook.

Juniper Networks and Junos are registered trademarks of Juniper Networks, Inc. in the United States and other countries. The Juniper Networks and Junos logos are trademarks of Juniper Networks, Inc. All other trademarks, service marks, registered trademarks, or registered service marks are the property of their respective owners.


To read more please go to Click Here
Read more »
Posted by Juniper's Blog on 29/03/2013
It’s Not Size, But Sophistication That Matters

300 billion bits per second. The largest DDoS attack on record targeted Spamhaus, purportedly in retaliation for Spamhaus's decision to include a group in its widely used list of spammers. The attack's in-your-face, audacious scale has attracted global attention and interest.


To read more please go to Click Here
Read more »
Posted by Juniper's Blog on 27/03/2013
What does a 1G Internet mean to you?

On March 27, the FCC will be hosting a workshop called The Gigabit Community Broadband Networks. The goal of the event is to showcase what a 1G Internet offers, from fostering next-generation applications, networks and services to, you guessed it, faster speed. All of this is aimed at enabling “local” performance over a broader geographical footprint while generating exceptional new experiences across vertical industry applications.


To read more please go to Click Here
Read more »
Posted by Gigamon Blog on 26/03/2013
Big Data and Intelligent Visibility: Just Give Me the Information That’s Important to Me
by: Paul Hooper, Gigamon CEO Thirty-one thousand text messages in one month. One can only describe that as startling. Coming from the generation that preceded the texting-era, this seems like an incredible volume of communications that my two daug...
To read more please go to Click Here
Read more »
Posted by The Hot Aisle on 25/03/2013
Why take the personal out of personal computing?
In an attempt to reduce cost or improve performance at scale, some organizations are introducing low-functionality, shared or cloned desktop operating systems within their virtual desktop environments. This article sets out to show that this approach is flawed, both limiting and unnecessary, and that it increases operational risk and reduces agility. New developments [...]
To read more please go to Click Here
Read more »
Posted by Juniper's Blog on 21/03/2013
Innovation and Collaboration: The Way Forward to Counter Cyber Threats

Cybersecurity has played a particularly prominent role in the news recently, and policymakers ranging from members of Congress to the President are advancing policies to address the significant risks we face as a nation and a people. Critical infrastructure is of particular concern, and the intrusions evolve on a daily basis. In a threat environment that is ever-changing, there is no room for false starts or approaches that slow response time or redirect limited resources. It is important to keep focus on the most effective ways to mitigate the cyber risk.


To read more please go to Click Here
Read more »
Posted by Juniper's Blog on 21/03/2013
Rock the Vote

Apparently there’s a new way to rock the vote. And that’s through cyber fraud and deception. Yay, technology?


To read more please go to Click Here
Read more »
Posted by Juniper's Blog on 20/03/2013
What’s so big in a small Supercore?

With much of the recent hype focused on higher-density routers, many seem intrigued by the buzz around a small Supercore router, the PTX3000. Admittedly, it is the world’s most compact core router. But isn't traffic continuing to explode at double-digit rates? And what is so “Super” about a 3.84 Tbps system?


To read more please go to Click Here
Read more »
Posted by Juniper's Blog on 18/03/2013
Juniper Networks Breaks New Ground With World's Smallest Supercore and Industry's Densest 100G Optical Routing Interface

ANAHEIM, CA--(Marketwire - Mar 18, 2013) - (OFC/NFOEC) -- Juniper Networks (NYSE: JNPR), the industry leader in network innovation, today unveiled two new products designed to address key challenges facing service providers as they converge their networks to optimize their business.

First, Juniper Networks has advanced the industry's first and only Converged Supercore™ for service providers with the introduction of the Juniper Networks® PTX3000 Packet Transport Router. Unlike other core routers, which are often so large that they require building retrofits and redesigns, the new Juniper Networks router, at 10.6 inches deep, can be installed in virtually any space- and energy-constrained environment. The PTX3000's capacity is designed to scale rapidly over time to 24 terabits per second (Tbps), enabling the PTX3000 to simultaneously stream high-definition video to as many as three million households.
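As a quick sanity check on the streaming claim above, dividing the fully scaled capacity by the quoted number of households gives the implied per-stream rate; the short Python sketch below shows the arithmetic. The resulting 8 Mbps figure is an inference, not a Juniper specification, though it is a plausible bitrate for a high-definition stream of that era, so the claim is internally consistent.

# Sanity check: fully scaled capacity divided by the claimed household count.
# The per-stream rate is derived here, not stated in the press release.
total_capacity_bps = 24e12      # 24 terabits per second, fully scaled
households = 3_000_000          # simultaneous HD video streams claimed

per_stream_bps = total_capacity_bps / households
print(f"Implied per-household rate: {per_stream_bps / 1e6:.0f} Mbps")  # -> 8 Mbps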

In addition, Juniper Networks announced an integrated packet-transport physical interface card (PIC) with two ports of line-rate 100 Gigabit forwarding for the entire PTX family, which will now enable service providers to cost-effectively interconnect sites more than 2,000 kilometers (1,243 miles) apart, the equivalent of the distance between San Francisco, Calif. and Denver, Colo.

News Highlights

Reshaping Optical Core Routing: The PTX3000 breaks the mold for core routers, shipping at almost 4 Tbps of capacity, scaling up to 24 Tbps of total capacity, and measuring a mere 10.6 inches deep -- approximately the length of an Apple iPad. The PTX3000 Packet Transport Router provides network operators with groundbreaking size, performance and efficiency, addressing practical barriers that service providers face in upgrading networks today. The router follows Juniper's 2011 introduction of the world's first Converged Supercore, a new architecture to bring together the packet and transport worlds, and addresses the scale and flexibility challenges that today's service providers face. With the PTX3000, Juniper has redefined the upgrade so that one technician can hand-carry and install a PTX3000 in a matter of minutes rather than hours.

Industry's Lowest Power Footprint: With a simplified architecture, the PTX Series allows network operators to converge their networks from nationwide backbones down to the metro core, driving significant operational efficiencies. Juniper's approach collapses the traditional core architecture to deliver unprecedented scale at the lowest power footprint in the industry, at less than one watt per Gigabit. PTX3000 customers can get started with a minimum configuration of as little as 1,200 watts and a single power feed -- allowing smaller operators in space- and power-constrained environments to be up and running quickly, while realizing all the benefits of the Converged Supercore.

Rapid Service Expansion at a Fraction of the Cost: Powered by the purpose-built Junos® Express chipset and the proven reliability of the Junos operating system, the Juniper Networks PTX Series Packet Transport Router can scale for years to come, allowing operators to deliver mobile services to millions of subscribers at a cost savings of up to 65 percent over legacy architectures.

No Compromise of Performance and Scale: The PTX Series delivers unmatched efficiency, wire-rate performance on all ports simultaneously, even for the smallest packet sizes, and by far the industry's lowest latency, while at the same time reducing the cost of network ownership.

Resetting the Density Bar for 100G Optical: Designed to cost-effectively interconnect long-haul metro sites, the integrated packet-transport 100G DWDM OTN interface card offers the industry's highest-density 100G optical routing capabilities for use in open, standards-based architectures. It effectively reduces network complexity and offers transport performance with packet advantages. The new optical PIC also delivers the flexibility to operate across the entire PTX Series.

The PTX Series, in combination with the T Series Core Routers, the MX Series 3D Universal Edge Routers, the ACX Series Universal Access Routers and Junos Space Network Management, delivers a unique architecture for the future of service provider networks, promoting long-term investment protection by flexibly scaling total capacity to meet ever-increasing consumer and business traffic.

Supporting Quotes

XO Communications 
Our business continuously evolves at the escalating pace of mobility and emerging cloud-based applications. Our enterprise customers need communications services that will scale on-demand and remain highly reliable, which means our network core must have a simple architecture, be automated, and function with minimal changes. Juniper's PTX Series routers enable us to meet our vision to rapidly deliver advanced communications services to customers. Our Juniper Supercore will improve our customers' experience and offers the flexibility to deliver more competitive solutions in the future.
- Randy Nicklas, chief technology officer, XO Communications

Juniper Networks
To effectively deliver advanced services and remain competitive, service providers need a core network solution that will help streamline their business and reduce operational costs. The Converged Supercore is an innovative platform that enhances service provider economics while providing greater value to their subscribers. Following on the heels of the revolutionary PTX5000, the PTX3000 extends these benefits to new markets and geographies with a solution that is tailored for their specific needs.
- Rami Rahim, executive vice president, Platform Systems Division, Juniper Networks

Infonetics Research
Traffic patterns in metro networks are rising at such a high rate that requirements in the IP metro core are similar to those of previous IP backbone core networks. The PTX3000 Packet Transport Router has clearly achieved a new packaging density for core routers, with capacity to scale up to 24 Tbps in an energy efficient, 1/2 rack, 300mm ETSI form factor. We expect that these characteristics will draw the consideration of service providers as they redesign their metro networks for the future.
- Michael Howard, principal analyst, Infonetics Research

Live Demonstrations at OFC/NFOEC and MPLS World Congress
Juniper Networks will demonstrate an end-to-end service provider network solution in booth #1433 at OFC/NFOEC March 19-21, including MPLS from access to core. It will showcase Juniper's award-winning ACX Series Universal Access Routers, the MX Series 3D Universal Edge Router, the game-changing Converged Supercore solution and PTX Series Packet Transport Switches. The demonstration will also highlight dynamic and application-based optimization using PCE and ALTO protocols for the delivery of differentiated services. In addition, the PTX Series will also be featured at MPLS World Congress 2013 at the Hotel Marriott Paris Rive Gauche in France, March 19-22.

Additional Resources:

Blog by Rami Rahim: The Next Chapter in the Converged Supercore
Introducing a Supercore without the Super Size
Juniper Networks OFC/NFOEC 2013 - http://www.juniper.net/us/en/dm/ofc/

About Juniper Networks
Juniper Networks is in the business of network innovation. From devices to data centers, from consumers to cloud providers, Juniper Networks delivers the software, silicon and systems that transform the experience and economics of networking. Additional information can be found at Juniper Networks (www.juniper.net) or connect with Juniper on Twitter and Facebook.

Juniper Networks and Junos are registered trademarks of Juniper Networks, Inc. in the United States and other countries. The Juniper Networks and Junos logos and Converged Supercore are trademarks of Juniper Networks, Inc. All other trademarks, service marks, registered trademarks, or registered service marks are the property of their respective owners.


To read more please go to Click Here
Read more »
Posted by Gigamon Blog on 08/03/2013
Mobile World Congress 2013 Recap: Big Visibility for Big Data & Turning Big Data into Manageable Data
by: Andy Huckridge, Director of Service Provider Solutions & SME It was quite a week at Mobile World Congress. With a record attendance of around 72,000 people, this show continues to grow and grow. Which made it the perfect place to showcase Giga...
To read more please go to Click Here
Read more »
Posted by The Hot Aisle on 11/02/2013
Never mind the computer, get me a coffee
Why do many small businesses spend more money on coffee and tea than on their information technology systems?  Working with Dell Computers, I recently moderated an expert Think Tank in Amsterdam covering how small and medium businesses invest in technology for growth. This was an incredibly insightful discussion from owners and managers in all types [...]
To read more please go to Click Here
Read more »
Posted by The Hot Aisle on 23/12/2012
Predictions for 2013
  Every year I spend time thinking about what changes we will see in technology during the next year. It’s always a tough call as most changes only gain a foothold slowly and gradually over time. They creep upon us so that they become the new normal, stuff that we can’t imagine not having or [...]
To read more please go to Click Here
Read more »
Posted by The Hot Aisle on 29/10/2012
Data Center Infrastructure Management – the time has come
My friends at Rackwise (RACK.OB) have just won another DCIM deal with Sandia National Laboratories to install and deploy Rackwise in Sandia’s multiple data centers located at its principal facilities. Sandia is a multi-program engineering and science laboratory operated and managed by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of [...]
To read more please go to Click Here
Read more »
Posted by Shehzad Merchant, Chief Strategy Officer at Gigamon on 13/09/2012
Read more »
DCS Europe TV Video Blogs

Exclusive video blogs recorded for DCS Europe TV by leading IT partners.

 

6 Feb 2014 | Other

DataCore Storage Virtualization Overview

DataCore Software-Defined Storage SANsymphony™-V is DataCore's flagship 9th-generation storage virtualization solution.

12 Dec 2011 | Design & Facilities Management

Data Centre Fitness

Jeremy Wallis, Systems Engineering Director UK & Ireland at NetApp, talks about 'Data Centre Fitness'. Part 1 in a 4-part series.

19 Aug 2011 | Storage Networking

Document processing and storage

Simon Shorthose, MD of ReadSoft, looks at the latest trends in the document processing market and explains what impact these will have on enterprise storage requirements.

  Earlier News »