Although much has been written about Storage Area Networks (SANs) over the past few years, only about one in three IT shops has actually deployed a SAN, according to the April 2003 Ziff Davis Market Experts study on storage purchasing. Most of those SANs have been deployed in larger organizations, and for the most part, these storage devices are connected to each other and the systems they serve using FibreChannel cabling, switches and routers.
FibreChannel, which solved the distance limitations of SCSI cabling between disk and server, has been around for years, but it is a very different networking infrastructure from the Ethernet and TCP/IP used for most of the rest of IT's networks. A separate network for storage has real benefits, including better performance and easier troubleshooting, because storage traffic never has to compete with general network traffic. It also carries a significant downside: finding and retaining IT staff who understand how to keep FibreChannel storage networks happily running, as well as the additional investment in another network and the extra hardware costs incurred in the switching gear to connect it all together.
But SANs aren’t the only way to get the benefits of networked storage. Network Attached Storage (NAS), which has been around even longer than SANs, is a quick, easy way to get additional file servers up and running on an existing TCP/IP Ethernet network. NAS devices can literally be installed in minutes, don’t require any special cabling or a new network, and are, usually, less expensive per gigabyte of disk, to deploy. Still, you have to be careful how you connect it – if a NAS device is attached on heavily used network segment, the storage traffic can impact the LAN traffic and vice versa. Also, for applications that demand block-level access to disk storage, such as Oracle or MS-Exchange databases, NAS devices won’t deliver the performance that good old fashioned direct attached storage (DAS) in the server cabinet offers.Internet SCSI (iSCSI) adds to the somewhat confusing array of choices. With the advent of gigabit Ethernet (gigE), and the imminent availability of 10 gigabit Ethernet, our old friend Ethernet now offers raw performance in the same ballpark or better than FibreChannel (which typically runs at one or two gigabits/second). Just as FibreChannel offers a way to extend the distance between disk drives and their associated controllers and servers, iSCSI takes the same SCSI protocols and encapsulates them in Ethernet packets over TCP/IP, which mean you can now get the benefits of a SAN, including support for block-level transfers, without having to budget, staff for, and maintain a whole new network to get those benefits.
If you’ve already gone the FibreChannel route, then not to worry – new generations of FibreChannel adapters, switches, and directors are coming with even faster speeds. But for the majority of IT shops out there who haven’t gone the SAN route yet, and who are still feeling the budget pinch that the current economic climate has thrust upon us, there is now another route to explore, an Ethernet-only SAN.
Does this spell hard times ahead for FibreChannel? Not in the short term – in fact most FibreChannel switch vendors are adding iSCSI support to their hardware, just as companies like Cisco have added FibreChannel capability to their network switches. But for companies with limited IT budget and a need to get access to high performance, block-level SAN storage, iSCSI may be the way to go.
Thursday, March 29, 2007
What you don't know about compliance and its impact on Information Lifecycle Management - Storage Networking
Many comparisons have been made between the Sarbanes-Oxley regulatory requirements and Y2K. The effect on the technology industry (and the resulting reaction from the business community at large) has striking parallels ranging from the infusion of IT spending to the paranoia and knee-jerk reactions many companies are exhibiting as they desperately seek compliance.
While the similarities to Y2K are apparent, what is prominently different is that there is not an end date where companies will survive or fail based on their business acumen and the investment applied toward becoming compliant. This is a race without a finish line. Even more unsettling is the degree to which there is not a checklist or fail-safe way to know if a company is compliant or not. Sarbanes-Oxley has been a catalyst for positive behavior, yet it is riddled with nuances open to interpretation that may only be vetted in a court of law when it is already too late.
What is required in terms of a compliance solution must take into account how each company does business and include a thorough review and subsequent identification of what is affected. This information can be used to perform a risk analysis to determine the potential impact if compliance is not achieved (because of complexity, cost or simply poor decision-making). New companies have been created with specialty products that address the various sections of Sarbanes-Oxley and other regulations, and cottage industries are likely to be born from these new regulations in order to dissect and decipher the complexities of compliance. Established companies have repackaged existing products with a sexier Sarbanes-Oxley label in order to be more relevant to--and capitalize on--the willingness to spend money to become compliant. Interestingly, it is the legal departments (not the IT departments) that are opening their purses.
The timing of this legislation could not have been better for the technology industry--an industry that is in dire need of a jump-start in spending. In a flat economy where companies are striving to reduce excess spending and maximize system efficiencies, suddenly there is a strong call-to-action to become compliant or else run the risk of being the next high profile offender along the lines of Enron and WorldCom, among others.
[FIGURE 1 OMITTED]
The concept drawing attention as the means to the end here is Information Lifecycle Management (ILM). However, it is important to note that the premise behind ILM is not new. Companies have been managing the lifecycle of their data for years, whether that means backing it up, migrating it to tape, or whatever context is being considered for retention. Fundamentally, the question that needs to get answered is: What technologies will allow a company to cost effectively store, manage, protect and retain data when access and availability requirements change over time?
The Catalyst
Sarbanes-Oxley created a cause and effect among global companies, mandating an immediate call to action that has been unparalleled. Yet most companies are--and will continue to be--good corporate citizens who operate within the rules and who do not make liberal assumptions about how to bend the law. Regrettably, the landscape has changed dramatically in today's post-Enron business climate.
Not only has Sarbanes-Oxley been a catalyst in IT spending, it has influenced a degree of overspending of epic proportions. In desperation, companies are investing inordinate amounts of resources in order to achieve compliance by next year's government-imposed deadline of June 15th. According to technology advisory firm AMR, companies will spend $2.5 billion on Sarbanes-Oxley compliance projects this year alone. For the IT sector, this is the best news since the dot-com bubble.
Very Few Companies Are in It for the Long Haul
Yet despite the immediate and severe actions that companies are taking to achieve compliance, it is remarkable how shortsighted most companies are as they make their investments in new IT infrastructures. While e-mail and instant messaging are the two primary targets for new controls, these communication channels are only the tip of the iceberg; archiving demands will only continue to increase, and additional safeguards will be required to verify and authenticate original e-mails, recipient lists, time of delivery, return receipts and a host of other considerations. Companies will be forced to use an e-mail client that is deemed secure--and may be unable to communicate with e-mail clients that don't meet this requirement, greatly impacting interaction with outside vendors and contractors. Very few companies are planning today for what might prompt the next wave of regulatory requirements.
While many companies are making investments in storage and retention products, few are exploring the benefits of profiling or the concept that not all data is created equal. Having the capability to store volumes of e-mail is only half of the battle; being able to set policies in order to index and find relevant data within acceptable time periods is an equally important consideration that far fewer companies (or technology vendors) are weighing when making large investments in IT to meet today's regulatory requirements. Since the length of time data needs to be stored is ambiguous, companies will need to find cost-effective ways to store data on formats that will not become obsolete in the near future. Saving important material on the 8-track tape player of tomorrow is futile. While there are no easy answers (or crystal balls) to predict the staying power of storage media, companies are wise to at least consider how they will migrate, retain and update data over time in order to meet these new regulatory requirements.
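As a concrete illustration of the point that not all data is created equal, the following sketch shows one hypothetical way a retention policy might be expressed: each record class carries its own retention period and target storage tier, so aging or lower-value data can be migrated off primary storage rather than kept there indefinitely. The classes, periods and tiers below are invented for illustration and are not drawn from Sarbanes-Oxley or any other regulation.

```python
# Hypothetical, minimal retention-policy table: record class -> (years, tier).
# The classes and periods are illustrative only, not taken from any regulation.
from datetime import date

RETENTION_POLICY = {
    "email":            (7,  "archive"),   # e.g., keep 7 years on archive media
    "financial_report": (10, "archive"),
    "instant_message":  (3,  "nearline"),
    "scratch":          (1,  "delete"),
}

def disposition(record_class: str, created: date, today: date) -> str:
    """Return where a record should live today under the sample policy."""
    years, tier = RETENTION_POLICY.get(record_class, (10, "archive"))
    age_years = (today - created).days / 365.25
    if age_years > years:
        return "eligible_for_deletion"
    return tier if age_years > 1 else "primary"

print(disposition("email", date(2001, 5, 1), date(2007, 3, 29)))  # -> "archive"
```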
Cost-effective disaster recovery with snapshot-enhanced, any-to-any data mirroring - Storage Networking
For many years mirroring of production data for disaster recovery (DR) purposes has been available for both mainframe and open systems computing environments. Unfortunately, due to the cost and complexity of these types of solutions, they have mostly been deployed in larger organizations with larger IT budgets.
However, the demand for cost-effective disaster recovery solutions has never been higher, as organizations realize the value of their stored data and the high costs associated with any type of downtime. In fact, many organizations today have established a DR strategy or requirement, but have not actually implemented a solution due to budgetary or other restrictions.
Fortunately, a new generation of affordable data mirroring solutions has emerged that brings sophisticated DR capabilities to virtually any size organization. Some of the key features include:
* Compatibility with a wide range of existing storage devices
* Ability to mirror data between different devices from different vendors
* Using snapshot-enhanced mirroring to ensure data integrity and rapid recovery after a disaster or other disruption
Synchronous vs. Asynchronous Mirroring
In a synchronous mirroring environment, each time an application attempts to write data to disk, the transaction is sent to both the local and remote storage devices in parallel. It is not until both devices have committed the write to disk that the system acknowledges that the transaction is complete. The application that initiated the write must wait until it receives the acknowledgement before it can continue on to the next task.
In an asynchronous environment, each write transaction is acknowledged as soon as the local storage device completes the request, even if the remote system has not yet received and/or processed the request.
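The difference between the two approaches is easy to see in code. The following simplified Python model (a sketch of the concept, not any vendor's replication engine) shows both write paths: the synchronous path acknowledges only after the remote copy is committed, while the asynchronous path acknowledges after the local write and lets a background worker drain a queue to the remote site.

```python
# Simplified model of the two mirroring write paths; not a real replication engine.
import queue
import threading
import time

def write_local(block) -> None:
    time.sleep(0.001)    # pretend local disk commit (~1 ms)

def write_remote(block) -> None:
    time.sleep(0.010)    # pretend WAN round trip plus remote commit (~10 ms)

def synchronous_write(block) -> None:
    """Acknowledge only after BOTH copies are committed."""
    write_local(block)
    write_remote(block)           # the application stalls for this round trip

replication_queue = queue.Queue()

def asynchronous_write(block) -> None:
    """Acknowledge after the local commit; the remote copy catches up later."""
    write_local(block)
    replication_queue.put(block)  # buffered until bandwidth is available

def replication_worker() -> None:
    while True:
        write_remote(replication_queue.get())
        replication_queue.task_done()

threading.Thread(target=replication_worker, daemon=True).start()
```

Run against the same workload, every synchronous write pays the remote round trip, while asynchronous writes pay only the local latency as long as the queue keeps up, which is exactly the performance trade-off described next.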
Performance Issues
From a performance standpoint, a synchronous approach will always incur some level of performance degradation--even when the two storage devices are nearby--simply because both systems must complete each transaction before the application can continue. On the other hand, since an asynchronous approach acknowledges the write request without waiting for confirmation from the remote storage device, the performance of the system is virtually identical to that of a non-mirrored system.
Cost Issues
From a cost standpoint, a synchronous approach usually requires higher bandwidth and more equipment in order to maintain acceptable performance for several reasons:
Bi-directional traffic: Since each write transaction must be transmitted to the remote system and an acknowledgement received back, the transmission infrastructure must have sufficient bandwidth and performance to avoid becoming a bottleneck in this process.
Latency during peak periods: A worst-case scenario should be factored into the design of the transmission network, since spikes in data activity could degrade overall performance, or cause application time-outs due to extended latencies.
Scalability: SANs are designed to support multiple host servers, but as the number of hosts in a SAN increases, the synchronous mirroring infrastructure may not easily or economically scale to accommodate the increased data traffic.
As a result, a synchronous solution usually requires some level of over-provisioning of both the bandwidth and the available switch ports, in order to ensure sufficient performance during peak periods.
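A rough, back-of-the-envelope calculation shows why latency drives synchronous bandwidth and equipment requirements; the numbers below are illustrative assumptions, not measurements from any particular product or link.

```python
# Back-of-the-envelope: how round-trip latency caps synchronous write throughput.
# All numbers are illustrative assumptions.
local_write_ms = 1.0        # assumed local array commit time
round_trip_ms  = 5.0        # assumed metro-distance link round trip
per_write_ms   = local_write_ms + round_trip_ms

writes_per_second_per_stream = 1000.0 / per_write_ms
print(f"~{writes_per_second_per_stream:.0f} synchronous writes/s per outstanding stream")
# -> ~167 writes/s; an asynchronous mirror of the same array is limited mainly
#    by the 1 ms local commit (~1000 writes/s per stream).
```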
On the other hand, an asynchronous solution usually requires minimal bandwidth, as bi-directional traffic is significantly lower and communication latencies do not affect application performance. In addition, asynchronous solutions are designed to flexibly adapt to spikes in activity by buffering transactions in a queue until sufficient bandwidth becomes available to complete each transaction.
The optimal approach is to offer both synchronous and asynchronous mirroring solutions. In doing so, it is possible to impartially analyze each user's requirements before recommending an appropriate solution.
Data Integrity During Mirroring
One of the most critical factors in selecting a mirroring solution is the ability to ensure the integrity of the data being replicated between sites. Obviously, it makes little sense to mirror data unless you are confident that the data will be usable when needed. Mirroring must address two issues when it comes to data integrity:
1. The vast majority of disasters are not a single, instantaneous event. Instead, disasters usually unfold over a period of minutes or even hours (intermittent power outages, communication link disruptions, disk-drive failures, etc). And intermittent failures are the most difficult to handle, since they can corrupt the integrity of data not just once but several times during the course of an unfolding disaster.
Maximizing FC SAN investments with iSCSI - Storage Networking
Businesses have made significant investments in their Fibre Channel storage area networks. The challenge is to leverage the FC SAN so that these investments in capital equipment, staff and infrastructure deliver ever-increasing value to the business. The majority of the FC SAN investment is not raw disk capacity, but infrastructure costs such as disaster recovery mechanisms, high availability systems, backup, monitoring, security and resource allocation tools. Equipment investments include FC switches, routers, storage enclosures, host HBAs and backup devices. Soft dollars include management, staff, software agents, upgrades, maintenance and facilities.
Increased FC SAN ROI and value comes by increasing productivity, which means adding more users, encompassing a greater number of application servers and expanding the total amount of hosted data. Many businesses have been reluctant to put more servers and clients on the SAN. High per-server attachment costs and management complexity are major factors limiting the expansion of FC SANs. This has created a dilemma for many IT managers who want to fully utilize their FC SANs but find it difficult to justify the addition of more servers or clients.
A new standard for expanding SANs was required that would allow businesses to realize the full benefits of their FC SAN investments by greatly reducing the costs associated with connecting servers to the FC SAN. The IETF and industry leaders worked together to create the new iSCSI storage protocol, and iSCSI solved many of the problems causing the slow expansion of FC SANs. This new storage protocol resides on top of TCP/IP over Ethernet. Unlike Fibre Channel, iSCSI uses Ethernet and does not require special equipment or cabling; it can use common Ethernet NICs (network interface cards), switches, hubs and cabling.
The Benefits of iSCSI are Clear for Expanding FC SANs:
* Based on IETF standards to ensure interoperability
* iSCSI SAN connectivity costs are 1/2 to 1/5 that of FC
* Uses existing Ethernet cabling and existing network elements
* Uses common TCP/IP for global connectivity
* Leverages the existing expertise of IT professionals
* Is being quickly adopted by system, storage, and network vendors
* Provides the same reliability as FC
* Leverages IP security (e.g., SRP, CHAP and IPsec)

Table 2 shows relative costs for connecting application servers using FC vs. connecting the same servers or clients to an iSCSI/Ethernet storage network.
iSCSI to FC Gateways
Using an iSCSI gateway to the FC SAN can increase the number of application servers and clients storing their data within the FC SAN infrastructure. A gateway resides at the edge of the FC SAN, between the FC SAN and the Ethernet network. Hosts exchange data with the gateway using iSCSI over a common Ethernet network; the gateway converts the traffic to FC and routes it to the designated storage LUNs within the FC SAN. Gateways are equipped with two to four FC ports for connecting to the SAN and an equal number of Gigabit Ethernet ports for connecting to the Ethernet network. New hosts connected to the same Ethernet network will be able to access the FC SAN through the gateway.
The SAN infrastructure does not need to change when attaching the gateway. The gateway is attached via its FC ports, which appear to the fabric like host HBAs. The FC SAN administrator simply needs to provide the gateway access to storage LUNs, just as storage resources would be provided to an ordinary server. The gateway discovers and logs on to the storage resources and reads and writes to them in the same manner as an FC server. Figure 1 shows a typical high-availability configuration with clustered gateways being used to connect iSCSI-enabled servers to the FC SAN.
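Conceptually, the gateway combines protocol conversion with a simple lookup: each iSCSI target name it presents to Ethernet hosts maps to an FC LUN it has been granted on the SAN. The sketch below illustrates that mapping in Python with invented target names and WWPNs; it is not the interface of any actual gateway product.

```python
# Hypothetical sketch of the mapping an iSCSI-to-FC gateway maintains:
# iSCSI target name (seen by Ethernet hosts) -> FC port + LUN on the SAN.
# Names and WWPNs below are invented for illustration.
from dataclasses import dataclass

@dataclass
class FcLun:
    fabric_wwpn: str   # FC port on the storage array
    lun_id: int

LUN_MAP = {
    "iqn.2003-01.com.example:mail-store": FcLun("50:06:0b:00:00:c2:62:00", 0),
    "iqn.2003-01.com.example:erp-db":     FcLun("50:06:0b:00:00:c2:62:01", 1),
}

def route_iscsi_request(target_iqn: str, scsi_cdb: bytes) -> tuple:
    """Resolve the iSCSI target to its FC LUN; the gateway would then
    re-encapsulate the SCSI CDB in an FC frame and forward it."""
    lun = LUN_MAP[target_iqn]
    return (lun.fabric_wwpn, lun.lun_id, scsi_cdb)
```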
Volume Management Within the Gateway
One of the major issues in extending a SAN using a gateway is the task of managing the storage resources for a few to hundreds of new hosts and clients. The number of hosts and clients connected over iSCSI can become a fairly high number and be distributed over a large geographic area or even into another country. Traditional methods for volume management are far too expensive, complex and maintenance intensive to support this type of infrastructure.
Friday, March 23, 2007
Design Libraries speed WiMAX and WLAN product development
Providing preconfigured simulation setups, signal sources, and fully coded BER analysis for simulating the circuitry used in mobile BWA and 802.11n-based designs, the design libraries enable analysis of system performance before all components are designed. Both libraries work within the ADS environment and the Ptolemy simulator: the Mobile WiMAX library streamlines design and verification of OFDMA-based, last-mile service designs, and the 802.11n library streamlines design and verification of MIMO-based WLAN designs.
********************
PALO ALTO, Calif., Jan. 17, 2006 -- Agilent Technologies Inc. (NYSE: A) today announced two new design exploration libraries for use with its Advanced Design System (ADS) EDA software. The Mobile WiMAX library helps wireless systems designers and verification engineers speed development of communications products for broadband wireless access (BWA) applications. The 802.11n library helps system design and verification engineers speed development of the latest high-speed WLAN products.
"Catching and eliminating system integration problems early can save hundreds of thousands of dollars in a typical design cycle," said Afshin Amini, product marketing manager with Agilent's EEsof EDA division. "With these new design exploration libraries, designers can identify DSP and analog/RF integration problems early and avoid over-specification." The IEEE 802.16e standard, generally referred to as Mobile WiMAX, specifies air interfaces for BWA systems and utilizes roaming and handoff to enable laptop and mobile phone operation. The standard is expected to energize the BWA industry and open opportunities for systems in applications where these types of wireless systems were previously cost-prohibitive.
The 802.11n design exploration library is based on the latest proposal from the Enhanced Wireless Consortium (EWC), a coalition formed to accelerate the IEEE 802.11n standard development process and promote a technology specification for next-generation wireless local area networking (WLAN) products.
Both libraries provide preconfigured simulation setups, signal sources and fully coded BER analysis for simulation of the circuitry used in mobile BWA and 802.11n-based designs. They help speed the development cycle by allowing system designers to analyze a system's performance before all of its components are designed. The Mobile WiMAX library works within the ADS environment and with the Agilent Ptolemy simulator to streamline design and verification of OFDMA- (Orthogonal Frequency Division Multiple Access) based, last-mile service designs. The 802.11n library works within the ADS environment and with the Agilent Ptolemy simulator to streamline design and verification of MIMO- (multiple input-multiple output) based WLAN designs.
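The BER metric at the heart of these libraries is simple to state: compare the transmitted bit stream with the received one and divide the number of disagreements by the total number of bits. The short Python sketch below illustrates that calculation for a made-up noisy channel; it shows only the metric itself, not Agilent's coded implementation or any real 802.11n or Mobile WiMAX channel model.

```python
# Illustrative BER calculation: fraction of received bits that differ from
# the transmitted bits. The random channel below is a stand-in, not a model
# of any real 802.11n or Mobile WiMAX link.
import random

def bit_error_rate(tx: list, rx: list) -> float:
    errors = sum(1 for a, b in zip(tx, rx) if a != b)
    return errors / len(tx)

random.seed(0)
tx_bits = [random.randint(0, 1) for _ in range(100_000)]
rx_bits = [b ^ 1 if random.random() < 1e-3 else b for b in tx_bits]  # ~0.1% flips

print(f"BER = {bit_error_rate(tx_bits, rx_bits):.2e}")   # roughly 1e-3
```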
Both libraries also can be imported into the Agilent RF Design Environment (RFDE), allowing RFIC designers to access Mobile WiMAX and 802.11n test benches within the Cadence Virtuoso Custom IC platform through links developed as part of the ongoing alliance between Agilent and Cadence Design Systems.
About Agilent ADS, Ptolemy and RFDE
Agilent ADS offers a complete set of front-to-back simulation and layout tools and instrument links for RF and microwave IC design in a single, integrated design flow. Agilent Ptolemy is a system-level simulation and design environment within ADS, based on a hybrid of data flow and timed synchronous technologies, that facilitates analog/RF and digital signal processing co-design. Agilent RFDE tightly integrates RF simulation technologies from Agilent ADS into Cadence's analog and mixed-signal design flow to aid in large-scale RF/mixed-signal IC design.
More information about Agilent's EDA solutions is available at www.agilent.com/find/eesof.
U.S. Pricing and Availability
The Mobile WiMAX design exploration library is available now for purchase with ADS 2005A. The 802.11n design exploration library is expected to be available for purchase with ADS 2005A in early February 2006. Agilent ADS 2005A is available now, with prices starting at approximately $8,400.
About Agilent Technologies
Agilent Technologies Inc. (NYSE: A) is the world's premier measurement company and a technology leader in communications, electronics, life sciences and chemical analysis. The company's 21,000 employees serve customers in more than 110 countries. Agilent had net revenue of $5.1 billion in fiscal 2005. Information about Agilent is available on the Web at www.agilent.com.
Information in this news release applies specifically to products available in the United States. Product availability and specifications may vary in other markets.
Comcast's High-Speed Pro goes national … Cox offers home networking in Fairfax County … Adelphia rewards loyalty with free PPV … Honolulu rides high a
Comcast's High-Speed Internet Pro product is going national following its soft launch July 1. The HSD service for small and home-based businesses has been available in Comcast's own markets and is now expanding to its former AT&T Broadband markets across the nation, such as the San Francisco Bay Area. It features 3.5Mbps downstream and 384Kbps upstream speeds, along with five persistent IPs and 25 MB of Internet-based storage space to help telecommuters, power users and SOHO customers transfer files between home and office and host a website, all for $95 a month, including the modem lease. Cablevision's Optimum Online is also adding real-time financial tracking software from Money.net, including three free trial options for stock alerts and other features such as its popular Screamer portfolio tracker, along with a 25% subscription discount for the broadband enhancement.
Cox Communications launched home networking to its Fairfax County, Va., customers on Aug. 1. Up to four terminals in a home can share files or get online simultaneously via one broadband connection. The wireless home networking package costs $299.95 for hardware, software and installation for two computers. Technical support costs $9.95 with a one-year commitment. Time Warner Cable's Charlotte division launched Wireless Road Runner last week for $9.95 a month on top of a Road Runner residential subscription to its 395,000 customers.
***
Adelphia is reinstating its Adelphia Awards customer loyalty program, discontinued last year when its financial difficulties surfaced. Customers who were formerly members of the program with remaining points balances will shortly be receiving information about how to redeem those points for options such as a free In Demand PPV movie coupon, free or discounted service on its PowerLink high-speed Internet service, free digital cable or free Show-time Unlimited service.
***
Surf's up! Honolulu is the top-ranked broadband market in the nation, with 40% of adults accessing the Internet at home via ISDN, DSL or cable modem, according to Scarborough Research. San Diego was No. 2 with 34%, followed by Rochester, N.Y., at 32%; Austin, Texas, and San Francisco/Oakland/San Jose tied with 31%. The lowest broadband usage was reported in the Roanoke/Lynchburg, Va., area (6%), Albuquerque/Santa Fe, N.M. (8%), and Spokane, Wash. (9%). The data was compiled from more than 200,000 interviews between August 2001 and Sept. 2002. About 2,000 phone interviews were conducted in Honolulu followed by mail questionnaires, while New York and Chicago had 10,000 participants apiece.
***
Competition watch: Qwest Communications is bundling Sony games and entertainment in its DSL packages... RCN launched HBO on Demand in its Philadelphia, Boston, Manhattan and Queens, N.Y., markets for $4.95 a month, and also launched HDTV service in Queens... Hughes Electronics is launching a WiFi package targeting America's 18,000 trailer parks, which can charge what they like while Hughes handles the billing... DirecTV's NFL Sunday Ticket package is getting interactive through a partnership with GoldPocket Interactive. Subscribers can get updates and alerts - and the ability to watch a game plus scores and stats from other games on one screen - thanks to four new channels devoted to the service. DirecTV's Deutsch-produced national ad campaign, including a TV spot with actor John Goodman and radio spot featuring comedian Dennis Miller, touting the NFL package broke last week... Comcast's recent deal to launch ESPN HD in its systems should help keep hi-def football fans in the cable fold this season. The MSO now offers HDTV service to about half its customers, or 11.5 million subs (and aims to also have VOD in 50% of its homes by Dec. 31). Cablevision and Insight Communications' pending HD programming enhancements should help keep hi-def sports fans in cable's camp, too.
***
New hi-def programming coming up on HDNet: The Bacon Brothers Live concert, Aug. 14 at 8:30 p.m.; MLS soccer (D.C. United vs. Chicago Fire) on Aug. 16 at 8:30 p.m.; Ethiopia's Twin Killers: Starvation and AIDS (Aug. 17 at 8 p.m.); original series Across America: People of Las Vegas (premieres Aug. 17 at 8:30 p.m.); Jefferson Starship: Acoustic Explorer concert (Aug. 22 at 8:30 p.m.); HDNet World Report: The Wolf at the Door, on the endangered species, on Aug. 24 at 8 p.m. HDNet has also been screening hi-def broadcasts of MLS games at Regal Cinemas in Chicago, Denver, Los Angeles and New York this summer to show off the sharper sports action to prospective HDTV set owners.
Don't forget about connecting smaller healthcare providers
What type of organization could design and implement a reliable, secure and interoperable nationwide healthcare information network (NHIN), which would allow critical medical data to be shared seamlessly and in a timely manner, while ensuring equal access to all ranges of healthcare providers?
Much thought and discussion have been given to the handling of the actual patient data record, and what it should look like when sent over the NHIN. Considerably less thought has been given to the network infrastructure that will eventually carry and house this sensitive and critical data: Who will build it? What will it look like? How will it be secured?
Some argue that the solution is to let market forces and private companies develop network solutions through competition; then, the leader and best performer will naturally rise to the top. Others opine that any project of this nature and sensitivity should be run by government agencies so that strict controls would protect the patient, and profits would not drive decisions, which could adversely affect patients' privacy and healthcare.
The best solution is to create an organization that operates autonomously as a nonprofit organization, is regulated by the federal government, and embraces operations that are transparent to the patient and the public.
This NHIN organization would offer services via a "managed services/subscription-based" model that the large system integrators and network service providers, such as AT&T, offer today.
EHR, PACS and other critical medical data could be securely stored and shared from a secure, networked central location. The goal is to ensure that small and rural healthcare providers have the same access to medical information as the major teaching hospitals in large cities, and to help drive the standardization of health records and medical data much faster in the industry.
Healthcare organizations require a longer term technology outlook than a traditional business. Traditional businesses are trained much in the same way as Pavlov's dogs: Every three to five years, the bell rings and they line up to upgrade operations with the newest and greatest technologies.
However, in the healthcare profession, network upgrades, patient data transfers, data storage and acquiring new technologies present much more risk than simply the possibility of an order being lost or a customer's bill being incorrect. Some healthcare records require storage of 10 years or more.
Will the communications technology of 10 years from now be able to support older data that has already been stored? If patient data and information truly are to be shared among the entire healthcare community in a seamless manner, would it not make sense for the healthcare community to be able to utilize a seamless network? The sad fact is that not all healthcare IT communications are equal today.
Some hospitals and clinics are far ahead in implementing new networking technologies, such as wireless LANs and Voice over IP, or have advanced imaging capabilities, such as scanning and digitally storing X-rays and MRI images. How can smaller providers that are not as technically advanced or as generously funded expect to interact with more advanced institutions?
An NHIN that uses a standards-based infrastructure, accessed by any broadband or dial-up Internet connection, and charges a monthly fee instead of a large up-front investment in equipment, could level the playing field in healthcare and ensure that patients have access to their medical records, no matter what their location.
The NHIN Portrait
The NHIN infrastructure should be built upon the leading-edge security, data storage, remote access and data retrieval technologies in the industry. The NHIN should house patient information using SAN (storage area network) technologies and could be accessed through a standard Web-based interface.
Healthcare providers could connect to the network through technologies, such as virtual private networks via high-speed or dial-up connections. Any healthcare professional with the proper credentials, a PC and Internet access could view critical patient information from any location.
Likewise, patients with access via an Internet-connected PC could review and retrieve their medical records and be more involved with their healthcare decisions. Patients who do not have or cannot afford Internet access would still be able to request access to their records by giving key pieces of information, such as Social Security number and date of birth.
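One simplified way to picture that patient-facing request path: before any record pointer is released, the service matches the identifying details supplied against what is on file. The Python sketch below is a deliberately bare illustration with invented field names and an in-memory registry; a real NHIN would add strong identity proofing, audit logging and encryption.

```python
# Deliberately simplified illustration of a patient record request check.
# Field names and the in-memory "registry" are invented for this sketch;
# a real system would add identity proofing, audit logging and encryption.
from dataclasses import dataclass
from typing import Optional

@dataclass
class PatientRecord:
    ssn_last4: str
    date_of_birth: str    # e.g. "1960-07-04"
    record_location: str  # pointer into the storage network, not the data itself

REGISTRY = {
    "patient-0001": PatientRecord("1234", "1960-07-04", "san://mirror-a/rec/0001"),
}

def request_record(patient_id: str, ssn_last4: str, dob: str) -> Optional[str]:
    """Return the record pointer only if the supplied details match what is on file."""
    rec = REGISTRY.get(patient_id)
    if rec and rec.ssn_last4 == ssn_last4 and rec.date_of_birth == dob:
        return rec.record_location
    return None
```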
Security technologies, such as intrusion prevention and network quarantining capabilities, would proactively protect the network from external and internal threats such as security breaches, viruses and data theft. Although the network is, in theory, in one central location, several mirror sites would exist simultaneously for disaster recovery, data redundancy and faster data retrieval.
A standards-based, multivendor network infrastructure would provide network availability, data integrity, data storage and unmatched data security that only the largest hospitals and clinics could afford if they had to do it on an individual basis.
Much thought and discussion have been given to the handling of the actual patient data record, and what it should look like when sent over the NHIN. Considerably less thought has been given to the network infrastructure that will eventually carry and house this sensitive and critical data: Who will build it? What will it look like? How will it be secured?
Some argue that the solution is to let market forces and private companies develop network solutions through competition; then, the leader and best performer will naturally rise to the top. Others opine that any project of this nature and sensitivity should be run by government agencies so that strict controls would protect the patient, and profits would not drive decisions, which could adversely affect patients' privacy and healthcare.
The best solution is to create an organization that operates autonomously as a nonprofit organization, is regulated by the federal government, and embraces operations that are transparent to the patient and the public.
This NHIN organization would offer services via a "managed services/subscription-based" model that the large system integrators and network service providers, such as AT&T, offer today.
EHR, PACS and other critical medical data could be securely stored and shared from a secure, networked central location. The goal is to ensure that small and rural healthcare providers have the same access to medical information as the major teaching hospitals in large cities, and to help drive the standardization of health records and medical data much faster in the industry.
Healthcare organizations require a longer term technology outlook than a traditional business. Traditional businesses are trained much in the same way as Pavlov's dogs: Every three to five years, the bell rings and they line up to upgrade operations with the newest and greatest technologies.
However, in the healthcare profession, network upgrades, patient data transfers, data storage and acquiring new technologies present much more risk than simply the possibility of an order being lost or a customer's bill being incorrect. Some healthcare records require storage of 10 years or more.
Will the communications technology of 10 years from now be able to support older data that has already been stored? If patient data and information truly are to be shared among the entire healthcare community in a seamless manner, would it not make sense for the healthcare community to be able to utilize a seamless network? The sad fact is that not all healthcare IT communications are equal today.
Some hospitals and clinics are far ahead in implementing new networking technologies, such as wireless LANs and Voice over IP, or have advanced imaging capabilities, such as scanning and digitally storing X-rays and MRI images. How can smaller providers that are not as technically advanced or as generously funded expect to interact with these more advanced institutions?
An NHIN that uses a standards-based infrastructure, accessed by any broadband or dial-up Internet connection, and charges a monthly fee instead of a large up-front investment in equipment, could level the playing field in healthcare and ensure that patients have access to their medical records, no matter what their location.
The NHIN Portrait
The NHIN infrastructure should be built upon the leading-edge security, data storage, remote access and data retrieval technologies in the industry. The NHIN should house patient information using SAN (storage area network) technologies and could be accessed through a standard Web-based interface.
Healthcare providers could connect to the network through technologies such as virtual private networks over high-speed or dial-up connections. Any healthcare professional with the proper credentials, a PC and Internet access could view critical patient information from any location.
Likewise, patients with access via an Internet-connected PC could review and retrieve their medical records and be more involved with their healthcare decisions. Patients who do not have or cannot afford Internet access would still be able to request access to their records by giving key pieces of information, such as Social Security number and date of birth.
Security technologies, such as intrusion prevention and network quarantining capabilities, would proactively protect the network from external and internal threats such as security breaches, viruses and data theft. Although the network is, in theory, in one central location, several mirror sites would exist simultaneously for disaster recovery, data redundancy and faster data retrieval.
A standards-based, multivendor network infrastructure would provide network availability, data integrity, data storage and unmatched data security that only the largest hospitals and clinics could afford if they had to do it on an individual basis.
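To make the access model just described concrete, here is a minimal sketch, assuming a hypothetical record store and credential registry (none of the names, fields or checks below come from any NHIN specification), of how a request might be authorized either by a credentialed provider or by a patient supplying key identifying details such as Social Security number and date of birth.

# Minimal, hypothetical sketch of the access model described above.
# The record store, credential registry, and field names are illustrative
# only; a real NHIN would sit on SAN-backed, mirrored storage behind a
# hardened web interface with VPN access and intrusion prevention.
from dataclasses import dataclass
from typing import Optional

@dataclass
class PatientRecord:
    ssn: str
    date_of_birth: str  # ISO format, e.g. "1960-04-12"
    name: str
    summary: str

# Stand-ins for the central record store and the provider credential registry.
RECORDS = {
    "123-45-6789": PatientRecord("123-45-6789", "1960-04-12", "Jane Doe", "..."),
}
AUTHORIZED_PROVIDERS = {"dr_smith": "s3cret"}

def provider_lookup(username: str, password: str, ssn: str) -> Optional[PatientRecord]:
    """A healthcare professional with proper credentials retrieves a record."""
    if AUTHORIZED_PROVIDERS.get(username) != password:
        return None  # authentication failed; the request is refused
    return RECORDS.get(ssn)

def patient_lookup(ssn: str, date_of_birth: str) -> Optional[PatientRecord]:
    """A patient without Internet access requests their own record by
    supplying key identifying details (SSN and date of birth)."""
    record = RECORDS.get(ssn)
    return record if record and record.date_of_birth == date_of_birth else None

if __name__ == "__main__":
    print(provider_lookup("dr_smith", "s3cret", "123-45-6789"))
    print(patient_lookup("123-45-6789", "1960-04-12"))

In practice the credential check, auditing, mirroring and transport security would all be handled by the infrastructure described above; the sketch only shows the two request paths the article mentions.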
Telstra deploys ADVA Metro kit - FSP 3000 will be part of optical infrastructure solution deployed by Siemens in major Australian cities for Telstra
ADVA Optical Networking announced that its Fiber Service Platform (FSP) 3000 will be part of the optical infrastructure solution deployed by Siemens in major Australian cities for telecommunications carrier Telstra. Telstra awarded Siemens IC Networks a 3-year frame contract and preferred supplier status in 2003 for the carrier's nationwide 10Gbit/s optical backbone network.
The frame contract includes ADVA's FSP 3000 carrier-class metro infrastructure system, Siemens' Multiservice Provisioning Platform Surpass hiT 70xx series, and Siemens' Surpass hiT 7500 Dense Wavelength Division Multiplexing (DWDM) long-haul equipment. Siemens has a strategic partnership with ADVA, which includes reselling the FSP 3000. The FSP 3000 will be seamlessly integrated into Telstra's Synchronous Digital Hierarchy (SDH) transmission network, since it is managed by Siemens' Transport Network Management System (TNMS) at Telstra's Global Operations Centre. As Siemens builds a nationwide 10Gbit/s optical backbone network for Telstra over the coming three years, which will serve as the primary backbone for both voice and data services in Australia, the FSP 3000 will be installed at critical junctions in metro areas to interconnect Siemens' long-haul systems.
The FSP 3000 employs parallel use of DWDM and Time Division Multiplexing (TDM) technology to enable all protocols between 8Mbit/s and 10Gbit/s, and up to 256 applications, to be transported over a single fiber pair. The system's design supports point-to-point, linear add/drop, ring, and meshed network topologies with up to ten nodes across distances of up to 500 kilometers without regeneration.
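As a rough, hypothetical illustration of those stated limits (the per-service rate bounds and the 256-application ceiling come directly from the description above; the function and the example service mix are invented for illustration), a network planner might sanity-check a proposed channel plan like this:

# Hypothetical sanity check of a proposed service mix against the FSP 3000
# limits quoted above: per-service rates between 8 Mbit/s and 10 Gbit/s,
# and at most 256 applications over a single fiber pair.
MIN_RATE_MBPS = 8
MAX_RATE_MBPS = 10_000
MAX_APPLICATIONS = 256

def plan_fits(service_rates_mbps):
    """Return True if the service count and every per-service rate fall
    within the limits quoted for the platform."""
    if len(service_rates_mbps) > MAX_APPLICATIONS:
        return False
    return all(MIN_RATE_MBPS <= rate <= MAX_RATE_MBPS for rate in service_rates_mbps)

# Example: one hundred Gigabit Ethernet services plus five 2.5G links fit;
# a 4 Mbit/s service does not, because it is below the 8 Mbit/s floor.
print(plan_fits([1_000] * 100 + [2_500] * 5))  # True
print(plan_fits([4]))                          # False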
Thursday, March 08, 2007
AOL LLC launches Music Now Web Services developer site
The AOL Music Now Web Services developer site has been launched by Internet company AOL LLC, providing tools for web developers, bloggers and music fans.
The new site, which can be found at http://developer.aolmusicnow.com, allows users to add custom feeds of artist, chart, album, playlist and other music information from AOL Music Now to other sites. The feeds can be added to their own web site, blog, e-mail, social networking pages or the new AIM Pages service, currently offered as a beta.
AOL said the site offers instructions and documentation that are easy to follow and uses standard RSS feeds. It allows users to subscribe to AOL Music Now data feeds through any My AOL-compliant RSS feed reader or website, in order to create and publish updated music features within applications and sites.
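Because the feeds are standard RSS, consuming them takes only a few lines in any language. The sketch below is a hypothetical example (the feed path is a placeholder, not a documented AOL Music Now endpoint) of pulling item titles from one of the published feeds.

# Hypothetical sketch of consuming an AOL Music Now RSS feed.  The URL below
# is a placeholder; the actual feed paths were documented on the developer
# site.  The parsing pattern is standard for any RSS 2.0 feed.
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "http://developer.aolmusicnow.com/feeds/example-chart.xml"  # placeholder

def fetch_feed_titles(url):
    """Download an RSS feed and return the title of every <item>."""
    with urllib.request.urlopen(url) as response:
        tree = ET.parse(response)
    return [item.findtext("title") for item in tree.iter("item")]

if __name__ == "__main__":
    for title in fetch_feed_titles(FEED_URL):
        print(title)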
Emcore and Corona cross license
EMCORE Corp. and Corona Optical Systems announced a cross-licensing agreement for parallel optical transmitters and receivers. The agreement provides EMCORE with an exclusive license to manufacture and sell Corona's OptoCube 40 parallel optical transmitter and receiver modules. In return, Corona has obtained a license to manufacture and sell EMCORE's Model 9512 twelve-channel parallel optical receiver and transmitter modules.
Corona's OptoCube 40 transmitter and receiver modules provide full electrical-to-optical conversion in a compact 13mm x 13mm package. Each module features 12 optical channels operating at speeds of up to 3.35 gigabits per second per channel (Gb/s/ch), resulting in an aggregate throughput of over 40 Gb/s. With its low, compact profile, the OptoCube 40 provides more throughput per square centimeter than any other module currently available. The small size allows system designers to develop smaller, more efficient systems. The devices support standard surface-mount (SMT) manufacturing processes and are offered in pick-and-place compatible trays. The OptoCube 40 modules have a reach of 300m over standard-bandwidth multimode fiber at maximum speeds of 3.35 Gb/s/ch.
EMCORE's 9512 transmitter and receiver modules also provide an aggregate throughput up to 40 Gb/s over 300m with full SNAP12 MSA compatibility. EMCORE's high-speed 9512 transmitter and receiver modules also support conventional DC JTAG boundary scan per IEEE 1149.1, and the IEEE P1149.6 AC-coupled JTAG boundary scan.
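The "over 40 Gb/s" aggregate figure follows directly from the quoted per-channel numbers; here is a quick back-of-envelope check (plain arithmetic, nothing assumed beyond the specs above):

# Aggregate throughput implied by 12 parallel channels at up to 3.35 Gb/s each.
channels = 12
per_channel_gbps = 3.35
aggregate_gbps = channels * per_channel_gbps
print(f"{aggregate_gbps:.1f} Gb/s aggregate")  # 40.2 Gb/s, i.e. "over 40 Gb/s"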
THE EARLY 20th century was filled with predictions that the airplane, the automobile or the assembly line had made parliamentary democracy, market economies, jury trials and bills of rights irrelevant, obsolete and harmful. Today's scientific-technological revolutions (epitomized by space shuttles and the Internet) make the technologies of the early 20th century--its fabric-winged biplanes, Tin Lizzies and "Modern Times" gearwheel factories--look like quaint relics. Yet all of the "obsolete" institutions derided by the modernists of that day thrive and strengthen. The true surprise of the scientific revolutions ahead is likely to be not the technological wonders and dangers they will bring but the robustness of the civil society institutions that will nurture them.
This may seem counterintuitive to many people. Surely novel technological capabilities require novel social institutions, right? The experience of the past century argues that the opposite is the case. Institutions tend to be modified more than replaced. They do not die out unless they demonstrate actual and substantial harm, and they adapt only as much as needed to provide a viable solution to pressing problems. We should respond to the challenges facing us by strengthening an evolving framework based on our best and most successful institutions.
Enterprise IM-ing: once a neat pop-up window for casual conversation, now a powerful networking tool predicted to revolutionize company productivity
Instant Messaging. Or, as the kids like to call it these days--"IM-ing." There was a time when the technology seemed to cater to a specific segment of the population, consisting mostly of tech-friendly teens who used it to secretly exchange private messages with one another. But things have changed. IM is all grown up and is popping up in enterprises all across the country with a myriad of applications--presence being the most promising one of all.
Vladimir Butenko, CEO of Stalker Software, places IM technology in the same category as live telephone conferencing. In his words, IM provides a medium for situations in which waiting even 15 seconds for an answer is just not an option. In a case like this, you could use e-mail, but it will not get you the same kind of immediate response. "With e-mail communication, users exchange memos not phrases," said Butenko.
Research firm IDC predicts that after a cooling period in 2002-2003, worldwide messaging applications market revenue is expected to more than double to nearly $2.4 billion by 2007. There are currently more than 100 million users of IM worldwide, and the Gartner Group predicts that by 2006, IM will be used more often than e-mail as the preferred method of messaging in the enterprise.
Monday, March 05, 2007
Storage Digest: RLX Makes Moves in Storage Area Networking
Blade-server pioneer RLX Technologies Inc. will look to update its image next week when it introduces a number of servers and management applications, as well as an expansion into server and storage-networking products. In addition to new management software, RLX will also for the first time offer storage and server interconnects. RLX's new 600ex Dual Gigabit Ethernet switch is a 20-port integrated switch designed for the company's 600ex blade-server chassis. RLX will also introduce on Oct. 15 its FibreChannel SAN Passthrough for connecting blade servers to external storage, and in November it will debut its FibreChannel SAN Interconnect for its server chassis. These networking devices will hit the market around the same time as RLX's three new ServerBlades, which feature 2.6-GHz, 2.8-GHz, or 3.0-GHz Pentium IV processors.
PMC-Sierra Inc. last week announced its sampling of the industry's first 4.25 Gbps Fibre Channel intelligent Port Bypass Controllers and Quad CMOS SERDES devices for next-generation enterprise storage arrays and storage network applications. The PM8377 PBC 4x4G and PM8369 PBC 18x4G intelligent Port Bypass Controllers support 4.25 Gbps enterprise-class disk enclosure applications.
While IT storage budgets remain tight, results of a recent survey indicate that some managers are starting to see corporate purse strings loosening for IT projects tied to their company's larger disaster recovery planning efforts. When asked about their highest priority storage projects planned for 2004, one quarter of 110 IT managers and decision makers attending the Storage Decisions Conference last week placed disaster recovery and improving their company's backup and archival systems at the top of their to-do lists for next year. As for the backup or recovery technology medium of choice, 86.4 percent of survey respondents said they still use tape for primary backup purposes, despite industry hype around disk replacing tape. Another 27 percent said they use tape just for off-site archiving.
Tut Teams With Broadcom On Home Networking
New York -- Home networking's standard bearer, Tut Systems, announced last week that it is teaming up with Broadcom on a new networking technology that uses in-home phone lines to build networks running at speeds of up to 10 Mb per second.
The two companies said the technology, called MediaShare, will eventually be able to scale up to speeds of 60 Mb per second.
The move is Broadcom's first into the home networking space, a potential market that is expected to grow considerably as the number of multi-PC households grows. Broadcom, based in Irvine, Calif., makes chips for DSL products, cable modems and local area networks.
The new partnership was the first strategic move by Pleasant Hill, Calif.-based Tut since its IPO late last month. The current 1.0 standard of home phoneline networking is based on Tut's HomeRun technology. Both Broadcom and Tut are members of the Home Phoneline Networking Alliance.
"As the number of homes with multiple PCs grows more than twice as fast as the overall penetration rate of PCs, the opportunity for networks in the home becomes increasingly significant. There are already over 800,000 homes in the U.S. which have installed networks to communicate among computers, other types of devices and peripherals," said Kevin Hause, an analyst at International Data Corp. "In 1999, we expect over 80% of U.S. in-home networking nodes to be based on phone line technology." Breaking the 10 MB per second barrier is significant because it will allow a broader use of the technology, for such applications as video, than is currently possible at 1 Mb per second, which is sufficient for peripheral-sharing and PC-to-PC file-sharing.
"This technology is key to the widespread use of new classes of IP consumer devices within the home, while maintaining compatibility with a large legacy installed base," said Broadcom CEO Henry T. Nicholas. "MediaShare is a driving force behind seamless interoperability of phoneline, powerline and wireless networks in multi-tiered home networks of the future."
DEC to shed networking business - DEC to sell its networking hardware business to Cabletron Systems
LAS VEGAS--At least some speculation ripping across the Fall Comdex 97 show floor is true: Digital Equipment and Cabletron Systems are about to announce a business partnership involving the sale of Digital's networking hardware business to Cabletron, according to sources close to the deal.
Unfortunately for battle-weary Digital executives, this fact fueled "baseless," "ridiculous" rumors among the riffraff that "(CEO Robert) Palmer's going to break up the whole company except services to get the most for his shares." EN heard this perception over and over last week, and had heard it before then as well.
The tongue-wagging went as far as Digital selling its systems unit, consolidated last spring into a Products group (EN, April 7), to Compaq. Eckhard Pfeiffer, that company's CEO, beefed up this speculation with a keynote centering on computer supplier consolidation and a list, projected for the SRO crowd, of 1998/1999 suppliers that did not include DEC.
"Eckhard does that every year," Pat Foye, Digital's commercial desktop VP, said rolling her eyes in sheer exasperation, as did every other DEC representative in Las Vegas to whom the Compaq merger question was posed. NETWORKING TO CABLETRON
Digital Equipment and Cabletron Systems are finalizing an agreement to form "a strategic relationship that would include the sale of Digital's hardware products business" but also entails Cabletron making Digital-branded network hardware.
The two companies had been reluctant to talk about the deal. A Digital Networking spokeswoman argued: "We can't comment on things going on behind the scenes." Cabletron would not comment on whether the deal would go through, but couched that answer in a policy statement that did not cast any doubt. "I can say that we are looking to acquire businesses to round out our product line and increase our presence internationally, especially in regard to the internet service provider (ISP) market," a spokesman said.
The data communications hardware business is white-hot, with units growing exponentially and price competition fierce. Chip suppliers are scrambling to build devices for modem banks, computer fax/modems, and LAN cards, to name a few of the end products involved.
With the Digital deal, Cabletron is coming back from a short acquisition hiatus. 1996 saw the company acquire five companies: ZeitNet, Network Express, Netlink, The OASys Group, and the Enterprise Networks business unit of Standard Microsystems. Most of this year was spent digesting these acquisitions; the charges related to them negated the company's 1997 net income of $230 million.
Senior management changes came this summer, two of which involved former Digital executives. Allan L. Jennings, formerly of DEC's Advanced Technology group, became the head of CSI Netlink, Cabletron's frame relay business unit. Steven Gray, a former DEC director of corporate marketing, also became a Cabletron executive this summer.
And most importantly for the company's direction, Nynex senior officer Don Reed became CEO on Sept. 1, replacing co-founder S. Robert Levine who retires officially on Dec. 1. Cabletron's spokesman actually termed the goal of gaining a broader product line and greater international presence as "Don Reed's" intention.
THE CASE FOR COMPAQ TO BUY
Coincidentally, Compaq's Mr. Pfeiffer, in his Comdex keynote last Monday morning, gave the same acquisition rationale: broadening the product line and grabbing international channels. DEC is long on both product breadth and international presence.
Mr. Pfeiffer stated that consolidation in the computer business "is not only going to continue, it's going to accelerate." Where 70 percent of the world's computer systems are supplied by the top ten suppliers today, by the year 2000 that 70 percent will come from just four companies.
One ex-Digital manager said that the results of the most recent Digital shareholder votes would lend themselves to a break-up of the company for maximum share value. Over a third of DEC's shareholders wanted an investment banker hired. Over a third wanted Mr. Palmer's chairman, president and CEO positions split between him and a new executive. A majority passed the proposal to rescind the company's "poison pill" clause, and a similar majority approved changing the company's bylaws to say that board members must be elected every year.
Digital's response on the corporate level was an official "no comment," but a company spokesman said that the shareholder actions were strictly "advisory" and had not been enacted. Any hostile takeover of Digital now would still be met by the effects of the poison pill clause still in place, it was said.
Storage Networking Industry Gains New Professional Association
Storage users on Tuesday will get their fourth new advocacy group of the year, with the launch of the Association of Storage Networking Professionals, founder Daniel Delshad said today.
For users making buying decisions about the complex and expensive world of enterprise storage, "They're still feeling very powerless when it comes to separating the hype from what's real," Delshad said, in Los Angeles.
As with the Storage Networking Industry Association's Customer Council, industry analyst Jon Toigo's Data Management Institute, and consultancy Network System Architects Inc.'s SANSecurity.com, Delshad's organization aims to create a certification program, a resource for white papers, and online forums for users to share advice. Uniquely, the ASNP site will also offer a jobs section, classifieds for used storage gear, and a request-for-proposals database, he said.
Initially, ASNP will have 14 domestic chapters, and eight overseas. Only storage customers are allowed to join, and charter members in each region will conduct membership drives. Membership will be free to the first 1,000 people, and will be $199 each if that ceiling is eclipsed, Delshad said. The membership rolls will be screened to prevent resellers and vendors from disguising themselves as users, he added. The only role vendors can have is as site sponsors, he said.