Sunday, December 31, 2006

Tropos Networks Expands Executive Team, Adds Saar Gillai as VP of Engineering; Former Cisco Wireless Networking Business Unit Vice President

SUNNYVALE, Calif. -- Tropos Networks, the proven leader in delivering truly ubiquitous, metro-scale Wi-Fi mesh network systems, today announced the addition of Saar Gillai to its world-class executive management team. Mr. Gillai will serve as Tropos Networks' Vice President of Engineering. In his new role, Mr. Gillai will be responsible for the continued development of the Tropos MetroMesh router and network management product line, reinforcing and extending the company's market and technology leadership.

Mr. Gillai brings more than 20 years of engineering and management experience to Tropos Networks. Most recently, Mr. Gillai was Vice President of Engineering for the Wireless Networking Business Unit of Cisco Systems, where over the past four years he played a key role in building Cisco's enterprise WLAN product line to the number one position in the market, having developed and shipped more than $1.5 billion in products during his tenure. In addition, Mr. Gillai was instrumental in the company's WLAN acquisition and integration strategy, most recently playing a key role in the acquisition and integration of WLAN equipment provider Airespace. During his tenure with Cisco, Mr. Gillai also held key management positions in product development for the Desktop Switching Business Unit and the Multi Service Switching Business Unit, where he worked closely with major service providers on next generation network architectures.

Prior to Cisco, Mr. Gillai held management positions at Newbridge Networks, where he played a significant role in the development of its ATMnet systems, as well as at Rohde & Schwarz Canada and Wasserman Advisors.

"We are very fortunate to add an executive with Mr. Gillai's talent and experience to our team," said Ron Sege, President and CEO of Tropos Networks. "He brings a track record of engineering success in the networking market to our management team, and will help Tropos Networks further extend our capacity, scalability, network deployment and network management advantages over competitive solutions."

"I am extremely excited to have the opportunity to join such a dynamic company," said Saar Gillai, VP of Engineering of Tropos Networks. "It is rare to see a company achieve such immediate command of a market, and I look forward to utilizing the tremendous engineering talent within Tropos to ensure that our MetroMesh products will continue to be the chosen solution for metro-scale Wi-Fi deployments worldwide."

Tropos Networks also announced that Chris Rittler, formerly the company's Vice President of Product Development, has assumed the newly created position of Vice President of Business Development. In this role, Mr. Rittler will create partnerships with OEMs, strategic relationships with telecommunications equipment manufacturers and alliances with next-generation communications solutions partners.

"Mr. Rittler has worn two hats for an extended time, responsible for both engineering and business development," said Ron Sege. "This organizational change allows him to focus on the critical tasks of building relationships with OEM partners, leveraging telecommunications equipment manufacturers as a sales channel for Tropos products and creating a metro-scale Wi-Fi mesh networking ecosystem."

About Tropos Networks, Inc.

Tropos Networks is the proven leader in delivering truly ubiquitous, metro-scale Wi-Fi mesh network systems. We deliver the fastest, lowest cost and simplest wireless broadband access solutions, as demonstrated by the world's largest installed base of metro-scale Wi-Fi mesh networks. Our innovative MetroMesh(TM) architecture with our patented Predictive Wireless Routing Protocol(TM) (PWRP) allows public safety agencies, municipalities and service providers to quickly and easily deliver city-wide fixed and mobile multi-megabit connectivity for IP-based voice, video and data applications. PWRP is the first and only metro-scale optimized, radio-agnostic, wireless mesh routing protocol, which does with routing software what other approaches attempt to do with expensive hardware. The result is 10x better price/performance than any other approach to the broadband last mile. Tropos Networks is headquartered in Sunnyvale, California. For more information, please visit www.tropos.com, call 408-331-6800 or write to info@tropos.com.

Tropos Networks, Tropos, MetroMesh, PWRP and Metro-Scale Mesh Networking Defined are trademarks of Tropos Networks, Inc. All other brand or product names are trademarks or registered trademarks of their respective holder(s).

Saturday, December 30, 2006

Audience participation in storage networking - first in / first out - Industry Overview

Nothing adds to an experience like audience participation. Bringing the spectator into the plot adds considerably to the value of an event, as entertainers like Carol Burnett and game show icon Monty Hall can attest. In the world of computer mass storage, though, audience participation has risen to the next level. This represents a welcome change from the traditional "take it or leave it" attitude that many vendors have leveraged over the years.

SNIA's effort to involve CIOs in the plot is made flesh in the organization's Customer Executive Council. Its charter is to benefit both customers and vendors of storage networking technology. Customers benefit by having a direct voice into the storage networking industry to assert their shared storage requirements and strategic goals. Vendors benefit by having a broader exposure to the customer base and its needs. The Customer Executive Council thus provides a means to streamline the process of turning shared storage requirements into solutions, which expands business opportunities for vendors and puts viable products into the customers' hands sooner.

The council and its goals certainly play to the current marketing philosophies of the storage community. Vendors and analysts in the industry all point to the need to identify a customer's "pain" and to develop products, architectures and infrastructures that relieve that "pain" in a profitable way. The CEC is composed of corporate and institutional IT executives and managers who are responsible for the storage strategies of their organizations. End-user participants in the Customer Executive Council are not required to be SNIA members. One of the spark plugs engaged with the CEC is technologist Tom Clark from Nishan Systems. He comments, "The CEC from a strategic point and the Customer Advisory Council from a hands-on level are looking to channel unfiltered customer input into the industry, as opposed to vendor groups that filter." Clark is referring to the number of vendor user groups whose input is filtered to match a company's technology agenda or strategic worldview. The groups also sometimes keep this kind of customer input to themselves for competitive reasons.

Whether exploring pain or identifying satisfaction, input from CIOs and users is one of the grass-roots requirements in charting the technological course of mass storage hardware, software, systems and subsystems. The SNIA outreaches in this area are valuable, and could easily be some of the stars that chart the course to storage's next generations.

Friday, December 29, 2006

Cisco curriculum meets government security training standards - Information Systems Security Professional

Cisco Information Systems Security Professional Only Vendor-Specific Curriculum that Meets Standards of the National Security Agency and Committee on National Security Systems

Cisco Systems, Inc. has announced the addition of a government-specific security curriculum for network professionals. The new Information Systems Security (INFOSEC) Professional validates the knowledge and skills specified by the Committee on National Security Systems (CNSS) for federal systems engineers. The INFOSEC Professional is the only vendor-specific curriculum that meets the rigorous standards of the National Security Agency (NSA) and the CNSS.

The INFOSEC curriculum recognizes the knowledge and skills required for managing national critical information infrastructures and networks as specified by the CNSS in the NSTISSI 4011 standard. The CNSS represents a broad cross-section of federal departments and agencies and sets the training standards for information assurance professionals in government and industry. The CNSS is responsible for establishing the NSTISSI No. 4011 standard and for authorizing curricula that map to that standard.

"Cisco continues to set the bar for businesses and governments worldwide, and we are pleased to provide the networking validation recognized by the NSA and CNSS," said Greg Akers, senior vice president, Cisco Systems. "Cisco is at the forefront of security training, being the only industry vendor to provide security training that meets the federal 4011 standard. INFOSEC enhances our ability to provide end-to-end solutions to government partners with comprehensive security products, services and training scaled to meet their needs."

A recent report issued by the White House, titled "National Strategy to Secure Cyberspace," stated that one of the major barriers to improving cyber security was "an inability to find sufficient numbers of adequately trained and/or appropriately certified personnel to create and manage secure systems." To meet that need, Cisco will continue to develop certification and training programs designed to improve the skills and knowledge level of federal information assurance professionals.

Candidates for the INFOSEC Professional certification must hold a valid CCNA and pass the corresponding exams. The recommended training for the CCNA certification is available from a global network of Cisco Learning Partners and the Partner E-Learning Connection. Training may be purchased via Cisco Learning Credits, which are redeemable at Cisco Learning Partner locations worldwide. VUE and Prometric offer Cisco certification exams at locations worldwide.

Thursday, December 28, 2006

Using embedded platform management with WBEM/CIM: add IPMI to provide "Last Mile" Manageability for CIM-based solutions - Enterprise Networking

CIM has garnered a lot of attention over the years, much to the delight of vendors and end users alike. The DMTF organization, under the leadership of Winston Bumpus, has shepherded a process that addresses the seemingly never-ending dark tunnel of system complexity and vendor differences. By abstracting the data associated with that complexity into a set of guidelines, users and systems can communicate knowledge in an open way to the benefit of everyone. But what is under the hood of CIM? What practical purpose does it serve in a typical system or network? How does it compare and contrast with other technologies?

Equally, IPMI has benefited from a multi-vendor consortium. Over the past five years its promoters, Dell, Intel, HP and NEC, have standardized the hardware platform management interface to a wide range of components (power, fans, chassis, controllers, sensors, etc.), and the specification is now supported by 150 other adopters. Both CIM and IPMI thus offer key benefits. But how do they complement each other within the Data Center? How do you exploit these standards and make money doing so?

"Plus ça change, plus c'est la même chose": the more things change, the more they stay the same. For all the innovation, today's Enterprise Data Center appears very similar to the one of 10 years ago. Application (and the resulting server) sprawl has always challenged application and server administrators to set up, deploy and manage at the farthest corners of the business empire. New technologies still inundate the enterprise with promises of 'breakthrough' ROI--everything is 'web-based,' everything can be dynamically discovered, and application interoperability is seemingly achieved in a heartbeat. New product offerings promise tantalizing benefits. Symmetrical Multi-Processing (SMP) clusters simplify management and thus reduce TCO. New form factors, such as blade technology, appear as the ultimate weapon for dynamic resource allocation, improving price/performance with power-saving designs at ever-increasing MIPS/rack. Virtualization then takes its place as the new mantra--build systems that can accommodate 'Resources on Demand,' with compute, storage and I/O decoupled from the reality of the rack. Grid computing is now 'on tap' in true utility-like fashion. It's enough to make you tear your hair out.

Well, so much for no change--the Data Center remains at the 'center' of the 'data' universe. However, it should come as no surprise that every Data Center is different. Place the same equipment and the same number of people in two identical buildings with the same goals of reducing TCO and improving reliability, availability and serviceability, and you'll get two very different results. Why? Leaving aside human fallibility, most systems are modular to some degree; like Lego blocks, the same components can be assembled in very different ways. While many characteristics go into building out a successful Data Center infrastructure, the chances of success increase when using a building block approach based on products that support standards. Vendors who embrace a modular building block approach can deliver very real benefits to the Data Center, especially in delivering interoperable management to server and network administrators. Products that support modularity have three common characteristics:

Standard Interface: A standardized interface within the device that allows you to monitor and control the components as and when it's needed without regard to the BIOS, OS, Processor or color of the box. The ability to manage a modular server like a single system is crucial. This interface then needs to be exposed to support integration with existing management points. This is vital to ensure a holistic and unified view is retained in a complex and dynamic network infrastructure. Finally, this common interface must be 'Always On' (highly available) and be programmatic so that different applications and other standards can support and extend it.

Platform Instrumentation: Platform instrumentation is key: without a certain level of "embedded intelligence" you'll be left on the outside looking in. And because no network is truly identical or homogeneous, cross-platform management requirements must be pragmatic. This is best achieved by using standards-based building blocks. Standards help provide support for the basic hardware elements that make up a system--processor, fans, temperature, power, chassis, etc.--irrespective of the manufacturer or the state of the system.

Support Extensibility: Building blocks also have to embrace differentiation. They need to support the common hardware, firmware and software elements but also allow expansion to support unique elements. They also need to be usable and reusable by other OEMs, OSVs, IHVs and ISVs. This modularity supports differentiation by adding features specific to certain markets while also retaining existing development IP.
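To make the CIM/IPMI relationship concrete, the sketch below shows how raw IPMI-style sensor readings might be normalized into a CIM-like, vendor-neutral view. All names, readings and thresholds here are illustrative, not taken from any actual platform; the property names only loosely resemble those of CIM's numeric-sensor class.

```python
# Hypothetical sketch: normalizing raw IPMI-style sensor readings into a
# CIM-like, vendor-neutral view. Sensor names, values and thresholds are
# illustrative only.

RAW_SENSORS = [
    # (sensor name, reading, units, upper critical threshold or None)
    ("CPU1 Temp", 62.0, "degrees C", 85.0),
    ("Fan 2", 4100.0, "RPM", None),
    ("12V Rail", 12.1, "Volts", 13.2),
]

def to_cim_like(sensors):
    """Map raw tuples into dictionaries loosely resembling the properties
    a CIM numeric-sensor instance would expose (reading, units, health)."""
    instances = []
    for name, reading, units, upper_crit in sensors:
        health = "OK"
        if upper_crit is not None and reading >= upper_crit:
            health = "Critical"
        instances.append({
            "ElementName": name,
            "CurrentReading": reading,
            "BaseUnits": units,
            "HealthState": health,
        })
    return instances

for inst in to_cim_like(RAW_SENSORS):
    print(inst["ElementName"], inst["CurrentReading"], inst["HealthState"])
```

The point of the abstraction layer is exactly what the article describes: management software consumes one uniform schema regardless of which vendor's baseboard controller produced the readings.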

Wednesday, December 27, 2006

Rich Kendall, Senior Architect, Will Present Details on The Tarari T9000 in the 'Multifunction Security Equipment' Track at Linley Group's 'Designing Security in Networking Systems' Seminar

Tarari, Inc.:

WHO: Rich Kendall, Senior Architect at Tarari, Inc. is
participating in Linley Group's "Multifunction Security
Equipment" panel session. Kendall will highlight Tarari's
first chip-level product, the T9000, which accelerates
firewall functions, content filtering, antispam, antivirus,
and other applications. The presentation will cover the inner
workings of the T9000, typical system implementations, and how
developers can program their own "agents" for differentiated
security functions.

WHEN: Friday, September 16, 2005 at 1:15 PM PDT - 3:40 PM PDT

WHAT: Designing Security in Networking Systems
September 16, 2005
Seminar: 9:00 AM - 4:30 PM
Reception: 4:30 - 5:30 PM

WHERE: DoubleTree Hotel
2050 Gateway Place
San Jose, California 95110

TOPIC: Session 3 - Multifunction Security Equipment

Securing the network now requires a broad range of security
services, including network address translation (NAT),
preventing denial of service (DoS) attacks, intrusion
detection and prevention (IDS/IPS), and scanning incoming
mail for viruses and other malware. Sophisticated hardware
and software are needed to meet these evolving requirements.

Rich Kendall, a senior architect at Tarari, will present the
startup's new content-processing chip and describe how its
unique technology can be used in multifunction security
equipment.

Panelists in this session will include:

Implementing Security on Intel's Dual-Core Processors
Dileep Kulkarni, Strategic Platform Technology Architect,
Intel

Offloading Security Services to the Data Plane
Russell Dietz, CTO, Hifn

Implementing Multilayer Security for Unified Threat Management
Raghib Hussain, CTO, Cavium

A Content Processor with Nine Acceleration Engines
Rich Kendall, Senior Architect, Tarari

About Tarari, Inc.

Tarari, Inc., the award-winning acceleration company headquartered in San Diego, Calif., USA, designs and produces Tarari Content Processors that accelerate and offload compute-intensive, complex algorithms used in XML/Web Services, Network Security

Tuesday, December 26, 2006

Home networking: the next revenue stream? Operators, phone companies, ISPs and manufacturers are all angling to help consumers link their PCs, TVs and sound systems

At the National Cable and Telecommunications Association show in Chicago last May, Adelphia Communications CFO Timothy Rigas used basic math to illustrate the cable industry's fundamental revenue opportunity.

A cable operator charging $40 a month for TV service to a home can raise rates only so far before subscriber resistance or government regulation stops him. That means in order to maximize revenue on his pricey digital cable line, the operator must offer services other than TV and charge for them separately.

The answer, Rigas said, is to bundle telephony service and charge another $40. Add a two-way link to a home security service and collect $40 more. Add a video-conferencing capability and charge another fee. Soon, using little more than that basic cable line, the operator is collecting $150 per household rather than $40--without raising rates or consumer ire.
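Rigas's arithmetic can be sketched directly. The TV, telephony and home-security figures below come from the article; the video-conferencing fee is not stated, so the $30 shown is inferred from the quoted $150 total.

```python
# Per-household monthly revenue from bundling services on one cable line.
# TV, telephony and security fees are from the article; the $30
# video-conferencing fee is inferred from the quoted $150 total.
bundle = {
    "TV service": 40,
    "telephony": 40,
    "home security link": 40,
    "video conferencing": 30,  # inferred, not stated in the article
}

total = sum(bundle.values())
print(f"Monthly revenue per household: ${total}")  # $150 vs. $40 for TV alone
```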

Cable operators in North America are experimenting with yet another service they can bundle into their offerings, a service AOL chairman and CEO Barry Schuler calls broadband's most powerful new application, or killer app--home networking. At its most basic, home networking connects major electronic appliances--personal computers, televisions, sound systems and more--to one another so that data and ultimately video and sound can be transmitted seamlessly among them.

No one knows how big the market is. Allied Business Intelligence estimated that the market for home gateway equipment alone--the hardware that connects all the equipment--will rise to $7.1 billion in 2006 from $267 million in 2000, excluding service fees. The Strategis Group predicts that 80% of broadband homes will have some kind of home networking by 2006.

Whatever the size, multiple-system operators (MSOs) in Canada and the United States are holding trials to iron out technical details and learn which services subscribers are willing to pay for. That's the key to home-networking success.

Analysts say it's still too early to pick the killer home-networking app.

"We're watching closely to see which technologies will be first to gain widespread consumer acceptance and how service providers will successfully manage the customer service responsibilities incumbent with this technology," says Strategis senior analyst Keith Kennebeck.

On the hardware side, traditional set-top-box makers like Scientific-Atlanta (S-A), Pioneer and Motorola are building home-networking equipment. CableLabs is developing CableHome standards aimed at coordinating the technical requirements of equipment that will eventually form the network.

In the United States, Comcast has begun lab testing a networking product made by Maynard, Mass.-based software company Ucentric and could move into field trials by the end of the year, according to Ucentric marketing director Paula Giancola. Comcast officials confirm the trials, but are keeping the details close to the vest. Two other large U.S. MSOs are privately experimenting with Ucentric networking as well, Giancola adds.

The most advanced home-networking trials are taking place in Canada. Toronto-based Rogers Communications, an MSO with 2.3 million cable subscribers, is working with Ucentric on one 50-sub trial and is preparing to launch a second, says Michael Lee, VP and GM of interactive television for Rogers.

Rogers jumped into home networking because, with 14% to 15% broadband penetration--higher than penetration rates in most U.S. systems--it has a more mature market and its subscribers are ready for the next step, Lee says. The Canadian MSO calls its home-networking service "Triple Play" because it includes video, data and voice.

"We're seeing a significant audience of people who are ready to do more things with their computers," he says. "Behaviors like downloading music, people who have multiple PCs and want to connect those PCs together."

The Rogers-Ucentric experiment--which uses a box custom-built by Ucentric for the trials but would likely move onto other hardware when mass marketed--offers a rainbow of services ranging from interactive television (ITV) applications and networked home computers to more exotic fare.

A popular item, Lee says, is linking telephone caller ID to the television. The name and number of the person calling would pop up in a corner of the TV screen so the viewer can decide whether to interrupt, say, Law & Order to take a call from cousin Lily. More broadly, the system allows unified networking on all phones, TVs and PCs, including voice mail.

Monday, December 25, 2006

The future of CE networking is 1394, not DVI - In the Line of Firewire

THROUGHOUT 2001, despite the recession in the electronics and semiconductor industry, the IEEE 1394 multimedia standard made significant progress.

It emerged as the definitive bus for audio and video connectivity, and gained a foothold in the auto market as the backbone for the networked vehicle. Many new CE and PC peripherals emerged with 1394/FireWire/i.LINK.

But a small, dedicated group of engineers still insists that digital video interface (DVI) provides more benefits than the 1394/FireWire/i.LINK standard for digital TV. They continue to advocate DVI as the best answer for moving from analog to digital transmission of video and audio.

DVI has a long history, moving through several design changes before it achieved some design successes in monitors. It really wasn't conceived as a digital TV interface. It suffices as a point-to-point connector to deliver uncompressed video streams from a single source, such as a set-top box (STB) to a display. But to hook more than one digital consumer device to a TV, that TV requires an interface for each device. Each time a user changes the entertainment source, connected devices must be switched. And until recently, the audio section of the DVI specification has been proprietary.

As a result, DVI's not very useful in new PCs. In consumer electronics products, DVI is a reasonable first step toward the long-term goal of a fully networked home; but to succeed in this exciting ultimate network, IEEE 1394 is the answer.

Still, DVI fans make a couple of arguments for it. First, they say, only DVI safeguards against the for-profit copying of digital content that major studios want protected. They say the studios all love it, and of course they do: DVI's copy protection is really a "copy-never" scheme that deprives users of the basic right to make a personal copy of video and audio for their own, noncommercial use.

The standard that does protect against illegal copying, without denying user rights, is 1394. The Digital Transmission Content Protection protocol (known as 5C), developed by the leading advocates of 1394, protects copyrighted video and audio over 1394. It enables three options: copy once, copy never and copy freely. Major studios, led by Warner Brothers and Sony Pictures, have endorsed it. Others are expected to follow this year.
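The three copy-control options the article attributes to 5C can be modeled as a simple policy check. This is only a sketch of the options as described here, not the actual DTCP signaling or encoding:

```python
from enum import Enum

class CopyControl(Enum):
    """The three content-management options the article says 5C enables."""
    COPY_FREELY = "copy freely"
    COPY_ONCE = "copy once"
    COPY_NEVER = "copy never"

def may_record(state: CopyControl, already_copied: bool) -> bool:
    """Decide whether a recorder may make a copy under the given state."""
    if state is CopyControl.COPY_FREELY:
        return True
    if state is CopyControl.COPY_ONCE:
        return not already_copied  # one first-generation copy is allowed
    return False  # COPY_NEVER

print(may_record(CopyControl.COPY_ONCE, already_copied=False))
```

The "copy once" branch captures the article's core contrast with DVI: the user keeps the right to one personal copy, while commercial re-copying remains blocked.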

Users should be the industry's focus, and many of them are insisting on the ability to record digital video in their systems for their own use, not for profit. The bottom-line question about the new generation of TVs is this: How many consumers will pay $4,000 or more for a next-generation digital TV that is not equipped to record in digital format? My guess: not many, and not for long. That makes 1394 clearly the better choice for the consumer.

A second argument from DVI's supporters is that the cable industry supports DVI. But last year, there emerged a clear statement from the Society of Cable Television Executives (SCTE) that the 1394 standard has emerged as the preferred tool for interconnecting A/V signals on a common network, including the link between STBs and digital TV. So, 5C and 1394 appear to have the confidence of this group.

Beyond the copy protection issue, DVI backers also say that only uncompressed video provides optimal, nondistorted images. They maintain that DVI has the ability to deliver uncompressed video, that the compressed video streamed by IEEE 1394 provides much lower output quality, and that format changes may render expensive HDTVs obsolete.

The 1394 Trade Association's view is that compressing digital signals does not distort output in any way that affects the user. Compare the limits of DVI with 1394's universal, network-creating connectivity. Because 1394's role is to enable many different devices to share high-definition video, audio, IP traffic and other information for display and recording devices, it is a better solution for the user.

2003 Systems Report

5 FIFTEEN INC. 430 Boston Street Topsfield, MA 01983 Contact: Peter Marsh, President and CEO Phone: (978) 887-6615 Fax: (978) 887-2899 Email: peter.marsh@5fifteen.com Web site: www.5fifteen.com System: Maxim Circulation Operating Environment: Windows NT/2000; Sun server Minimum Hardware Requirements: Any standard Pentium PC

*

ADVANTAGE COMPUTING SYSTEMS 3850 Ranchero Drive Ann Arbor, MI 48108 Contact: Cindy Morphew, Marketing Director Phone: (734) 327-3651 Fax: (734) 327-3620 Email: sales@AdvantageCS.com Web site: www.advantagecs.com System: Publisher's Advantage System Operating Environment: Windows NT/2000; Oracle and SQL server Minimum Hardware Requirements: Depends on system

*

AUTOMATED RESOURCES GROUP, INC. 135 Chestnut Ridge Road Montvale, NJ 07645 Contact: Hank Garcia, Executive VP Phone: (201) 391-1500, ext. 514 Fax: (201) 391-3266 Email: hgarcia@callargi.com Web site: www.callargi.com System: ARGI Fulfillment System Operating Environment: Windows 2000, Windows XP Minimum Hardware Requirements: N/A (Application Service Provider)

*

CSSC, INC. 300 Raritan Center Parkway Edison, NJ 08818 Contact: Ernest Muir, VP, Sales and Marketing Phone: (732) 225-5555, ext. 211 Fax: (732) 417-0482 Email: salesdpt@csscinc.com Web site: www.csscinc.com System: TOPS Operating Environment: UNIX client-server, supporting multiple processors, vendors, relational databases, networks and GUI interfaces Minimum Hardware Requirements: 64 MB RAM, 2 GB disk, tape backup

*

CWC SOFTWARE, INC. 150 Grossman Drive Braintree, MA 02184 Contact: Andrew Conti, Sales Manager Phone: (781) 843-2010 Fax: (781) 843-8365 Email: sales@cwcsoftware.com Web site: www.cwcsoftware.com System: QuickFill Operating Environment: Windows 95, 98, NT, 2000, XP Minimum Hardware Requirements: 486 PC, 16 MB RAM

*

DATASYSTEM SOLUTIONS, INC. 4350 Shawnee Mission Parkway, Suite 179, Shawnee Mission, KS 66205 Contact: Lorna Fenimore, VP Phone: (913) 362-6969, ext. 133 Fax: (913) 362-6383 Email: lfenimore@datasystem.com Web site: www.datasystem.com System: The Multi-Pub System Operating Environment: NT, Linux or UNIX, with Windows, Macintosh or any Ethernet-based network Minimum Hardware Requirements: Depends on circulation size

*

GLOBAL TURNKEY SYSTEMS, INC. 2001 Route 46, Suite 203 Parsippany, NJ 07054 Contact: Sherry Solomon, Sales Support (ext. 678) Phone: (973) 331-1010 or (800) 221-1746 Fax: (973) 331-0042 Email: sales@gtsystems.com Web site: www.gtsystems.com System: UNISON6 Operating Environment: UNIX or Windows Minimum Hardware Requirements: Server: NT, Pentium III, 1 GB RAM, 10 GB RAID

*

LYNX MEDIA, INC. 12501 Chandler Boulevard, Suite 202 North Hollywood, CA 91607 Contact: Len Latimer, President Phone: (818) 761-5859 Fax: (818) 761-7099 Email: sales@lynxmedia.com Web site: www.lynxmedia.com System: First Edition Operating Environment: Windows 98/NT/XP/2000; NT or Novell networking Minimum Hardware Requirements: Pentium 200 MHz or better with 64 MB memory

*

MEDIA SERVICES GROUP LTD. 1 Atlantic Street Stamford, CT 06901 Contact: Alan Mendelson, VP-Sales Phone: (800) 234-4674 Fax: (203) 921-1791 Email: amendelson@msgl.com Web site: www.msgl.com System: CircWorks, BookWorks Operating Environment: Windows NT/2000; UNIX; Linux; ASP service available Minimum Hardware Requirements: Onsite: Pentium PC with 64 MB-plus memory. ASP: Windows or Mac PC plus browser

*

MENDON ASSOCIATES, INC. 4195 Dundas Street West, Suite 340 Toronto, Ontario, Canada M8X 1Y4 Contact: A. Jorge de Mendonca, President Phone: (800) 361-1325 Fax: (416) 239-1076 Email: info@mendon.com Web site: www.mendon.com System: MA Circulation Manager 5.3 Operating Environment: MS DOS, Windows Minimum Hardware Requirements: Pentium II 200 MHz

*

NEXTECH SYSTEMS CORP. 3671 Old Yorktown Road Shrub Oak, NY 10588 Contact: Robert Stengle, President Phone: (914) 962-6000 Fax: (914) 962-1338 Email: info@nextech-systems.com Web site: www.nextech-systems.com System: Interactive Fulfillment System Operating Environment: Networks (including Novell, Windows NT/2000 and others), DOS-VSE, MVS, Windows 95, 98/NT/2000/XP Minimum Hardware Requirements: 640 KB memory

*

PUBLISHERS SOFTWARE SYSTEMS, INC. 511 Washington Street Norwood, MA 02062 Contact: Wayne Zafft, President Phone: (781) 762-8001 Fax: (781) 762-8002 Email: waynezafft@compuserve.com System: MicroScribe Operating Environment: MS-DOS, Windows Minimum Hardware Requirements: 386 PC, 2 MB RAM, 5 MB disk

*

SANDLOT CORP. 250 West Center Street, Suite 200 Provo, UT 84601 Contact: Carl Berg Phone: (800) 769-7638 Fax: (801) 373-5066 Email: sales@sandlot.com Web site: www.sandlot.com System: Eclipsenet Operating Environment: Database Server: NT/W2000 or UNIX. Application Server: NT/2000. Client: NT/W2000, Web browser or any XML-aware device. DBMS: MS SQL Server or Oracle. Web Server: Any Minimum Hardware Requirements: Pentium Class

*

Sunday, December 24, 2006

A place in the SAN: storage protocols vie for life after fiber channel: new protocols like GFP can help carriers make their Sonet/SDH systems SAN-friendly

It's hard to overemphasize the importance of disaster recovery in the post-9/11 world. By no coincidence, interest in storage area networks has risen among enterprises looking to network their databases. And as interest grows, storage solutions are evolving from dumb storage arrays and feature-slim NAS systems to sophisticated offerings where factors such as quality of service, features, performance, availability, network support and standards adherence influence purchase decisions.

The demand for SAN connectivity and the inherent challenges in DIY systems present an opportunity for service providers to add storage to their corporate service portfolios, says Billy Basu, senior business development manager for optical networks at Nortel Networks.

"One of the remaining headaches for enterprises in terms of operational costs is that they need a dedicated routing network for storage, so they essentially have to run their own router network," says Basu. "That means buying protocol converters and leased lines, which is far more expensive for the enterprise than for the telco offering the service. An additional issue with leased lines is bandwidth--a T1/E1 connection looks pretty fast for things like Web surfing, but for SAN data traffic it's slow. Imagine copying a 10GB database across a network with a T1 line on either end."

For telcos that want to offer SAN services, metro Ethernet can deliver the access bandwidth that SANs need to be attractive. The problem with Ethernet, however, Basu says, is that it's bursty, which makes it tricky for existing storage protocols like Fibre Channel.

"Fibre Channel is a credit-based system--transmit a frame and a credit is consumed," says Basu. "This means good flow control, but if you're using IP, packets will get dropped every time the traffic bursts. In Fibre Channel, that means an unstable fabric, which means the SAN is effectively down."
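The credit mechanism Basu describes can be sketched as a toy model. The Python fragment below is illustrative only (the class and method names are our own; real Fibre Channel negotiates its buffer-to-buffer credit count, BB_Credit, at fabric login): each transmitted frame consumes a credit, each R_RDY from the receiver returns one, and a sender with no credits pauses rather than dropping frames.

```python
class FibreChannelLink:
    """Toy model of Fibre Channel buffer-to-buffer credit flow control."""

    def __init__(self, bb_credit):
        self.credits = bb_credit   # negotiated frame budget

    def can_send(self):
        return self.credits > 0

    def send_frame(self):
        if not self.can_send():
            # unlike IP, the sender pauses; no frame is ever dropped
            raise RuntimeError("out of credit: sender must wait for R_RDY")
        self.credits -= 1

    def receive_r_rdy(self):
        # the receiver signals a freed buffer, returning one credit
        self.credits += 1
```

This contrast is the crux of Basu's point: IP sheds load by discarding packets, which a Fibre Channel fabric interprets as instability, while credits throttle the sender before anything is lost.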

This is one of several reasons why Fibre Channel has been something of a mixed bag as the de facto storage protocol. It's reliable and scalable, but expensive to install and maintain. It also has a history of interoperability problems.

However, many vendors say it's the metro Sonet/SDH network that has to change if carriers are serious about cashing in on SAN demand. Some players suggest upgrading Sonet/SDH with GFP--Generic Framing Procedure, a standard defined under ITU G.7041, originally co-invented by Nortel and Lucent Technologies, that gives Sonet/SDH the flexibility to handle both variable and fixed-packet transport. Still, with new IP-based alternatives to traditional Fibre Channel coming onto the scene, some carriers may be tempted to skip the interim tech and deploy metro DWDM.

Transparent GFP

GFP is in essence a simple, protocol-independent frame delineation and encapsulation mechanism for mapping Layer 1 and Layer 2 protocols such as Ethernet and Fibre Channel into Sonet/SDH or optical transport networks. This allows carriers to map packet data into arbitrarily sized TDM pipes--for example, taking bursty Ethernet traffic and mapping it into a fixed-bandwidth channel.
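The encapsulation step can be sketched in code. The fragment below builds a minimal GFP core header in Python: a two-byte payload length indicator (PLI) protected by a CRC-16 check (the cHEC), using GFP's generator polynomial. It is a deliberate simplification--real GFP also scrambles the core header and payload and defines payload headers and idle frames--and the helper names are our own.

```python
def crc16_ccitt(data: bytes) -> int:
    """Bitwise CRC-16 with the GFP generator polynomial x^16 + x^12 + x^5 + 1."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def gfp_core_header(payload_area: bytes) -> bytes:
    """Build a 4-byte GFP core header: PLI (payload length) + cHEC (CRC-16).
    Simplified sketch: omits scrambling, payload headers, and idle frames."""
    pli = len(payload_area).to_bytes(2, "big")
    return pli + crc16_ccitt(pli).to_bytes(2, "big")

# one client frame wrapped for transport over a fixed-rate TDM channel
frame = gfp_core_header(b"client data") + b"client data"
```

The receiver hunts for a position where the CRC of the two PLI bytes matches the next two bytes--that self-delineation is what lets GFP carve variable-length client frames out of a continuous Sonet/SDH byte stream.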

In general terms, GFP is touted as a way for carriers to cost-effectively develop metro and wide-area multiservice platforms for rolling out new data services over legacy Sonet/SDH infrastructure without the need for an entire network upgrade.

However, a version of GFP called Transparent GFP (GFP-T) is designed to handle storage protocols like Fibre Channel, ESCON, and FICON by acting as a sort of extension cord through the network. Data can be split up and reassembled at the far end, but there is no packet processing and no termination of the data, so flow control mechanisms such as Fibre Channel's are passed through transparently.

What that means for SANs, says Nortel's Basu, is that GFP gives a carrier's Sonet/SDH network the ability to offer SAN services over Ethernet at bandwidth levels that enterprises want.

"For example, if you have a 10GB database and you need to transmit data from one database to another, the service provider can offer an SLA based on how fast the database is updated, or the speed of the transmission," Basu says. "Just take out the routers and use GFP."

Another key enabler that GFP offers, Basu adds, is that it can expand the reach of SANs over much longer distances.

"You can't do long-distance SANs right now because everyone's doing it with routers and protocol converters," he says. "All those protocol conversions get in the way."

Adoption of GFP is a bit forward-looking, since GFP-compliant products are hard to come by, says Yankee Group senior analyst Pat Matthews.

"GFP is still in its infancy," he observes. "For many carriers, focusing on their current infrastructure and services is far more important than rolling out new services via GFP. Before implementing GFP, carriers will also have to deal with how to offer the new services."

Matthews also points out that while GFP brings carriers benefits such as the ability to launch new services while avoiding the interoperability issues surrounding proprietary bandwidth framing and mapping techniques within the same networks (and, in the process, allowing them to prolong the life of their current Sonet/SDH infrastructure), it has some drawbacks as well.

Inrange Lab Vets ADVA Networking Solutions - Brief Article

Germany's ADVA Optical Networking [FSE: ADV], a provider of optical networking technology, says Inrange Technologies [Nasdaq: INRG], which offers high-availability, enterprise connectivity and storage-networking capabilities, qualified ADVA's Fiber Service Platform (FSP) systems with its line of IN-VSN FC/9000 Fibre Channel Directors. The testing was performed at Inrange's Interoperability Initiative Lab (I3), in Lumberton, N.J. The new FSP features, including 2:1 time-division multiplexing functionality and a 2 Gigabit Fibre Channel card, were qualified along with the 64-, 128-, and 256-port models of the FC/9000.

"I3 interoperability testing assures customers that ADVA's Fiber Service Platform is fully interoperable with Inrange's FC/9000, enabling customers to extend their storage networks across the enterprise and benefit from reliable and economical data availability," says Dale Lafferty, Inrange's vice president of marketing and alliances. Both companies will be exhibiting their technologies at next month's CeBIT show in Hannover, Germany.

Saturday, December 23, 2006

Home Networking: National Semiconductor to Expand Networking in the Home; Company to Include Tut Systems' HomeRun Technology in New Line of Networking

National Semiconductor Corporation Monday announced it has signed an agreement with Tut Systems to license Tut's HomeRun technology for use in its single-chip Ethernet physical layer transceiver solution for home networking.

HomeRun technology allows consumers to link computers and share peripherals and Internet access over existing home phone lines without interfering with normal telephone installation or services. With its next-generation PHYTER device, National will combine its Ethernet technology and Tut's 1Mbps HomeRun technology onto one chip, thus allowing PC and peripheral manufacturers to address both the home and Ethernet markets with one solution.

Today more than 40 million U.S. households own PCs, and analysts estimate that 15 million of these have multiple PCs. With this number expected to double by the year 2000, consumers are now looking for a simple way to network their multiple PCs together. Scheduled for production in the first half of 1999, National's PHYTER home-networking solution gives manufacturers greater flexibility in addressing this rapidly expanding market. PHYTER will be compliant with the first specification for low-cost networking using existing telephone wiring, expected to be published by the Home Phoneline Networking Alliance (HPNA) this quarter.

"National is committed to driving the future of communications," said Robert Penn, senior vice president and general manager of National's Communications and Consumer Group. "The inclusion of this new technology is a major step in our strategy to support the dramatic increase in TCP/IP traffic, whether within the home, over the Internet, or on private networks. Certainly this home-networking solution gives National another powerful tool to increase the viability of computing within the home, since resources can now be located where most convenient and shared between users." "The companies' vision is to have a simple IP network within the home to interconnect low-cost Information Appliances, peripherals and Internet access devices," said Penn. "This agreement, our acquisition of ComCore for its revolutionary DSP technology, and our major investments in the development of Gigabit Ethernet are all critical elements in National's commitment to being a leader in the networking market today and in the future."

"Our HomeRun technology was designed from the outset to make it simple and cost effective for silicon providers to add a home networking capability to existing chip designs," said Sal D'Auria, president of Tut Systems. "National Semiconductor and its Cyrix subsidiary are pioneers in the networking and sub-$1,000 PC markets. Our HomeRun technology is a perfect complement to National's expertise in these areas. We are excited about the opportunity to work with National in expanding connectivity in the home."

Virtual private storage delivers large-scale storage consolidation payoff - Storage Networking

Even in this prolonged economic downturn, storage capacity continues to grow. In fact, a recent study by the Meta Group projects a 90 percent annual growth rate over the next two years. Faced with this challenge, consolidation has emerged as the Holy Grail for enterprise storage managers. It is the key to gaining control over continuously expanding storage capacity while reining in the rapidly escalating cost of managing storage. Declares John McArthur, group vice president, Worldwide Storage Research at International Data Corp. (IDC), Framingham, Mass.: "Our research confirms that storage consolidation provides real operational benefits and delivers real savings."

Simply put, storage consolidation enables administrators to reduce the number of storage systems needed to support data center applications, lowering costs across the board. For example, reducing the number of storage systems in a data center frees up floor space and lowers operating expenses because the power and air conditioning requirements of the data center decrease. The real payoff, however, comes from the reduced complexity--fewer systems to be managed with corresponding increase in administrator productivity--as well as increased storage utilization and a reduction in software and maintenance costs.

Meeting the Capacity Challenge: Consolidation and SANs

The first phase of storage consolidation was the relocation of storage back to the data center. Storage that had moved out of the data center with the introduction of low-cost open systems servers is now migrating back as the cost of managing storage growth increases. Storage Area Networks (SANs) have provided the connectivity to enable large-scale consolidation around sharable pools of storage. Logical units (LUNs) of storage can be reallocated among servers from these easily expandable pools through SAN management tools, without the need to physically re-cable the storage devices. No longer does one server run out of storage while excess storage sits untapped on another. SANs, in effect, allow enterprises to stay ahead of the ever-expanding need for storage capacity without worrying about how much storage each server will need, and when.
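The pooling model described above can be made concrete with a small sketch. This is not any vendor's SAN management API--just a toy Python model (names are our own) showing the key idea: LUNs are carved from one shared pool and reassigned between servers entirely in the management layer, with no physical re-cabling.

```python
class StoragePool:
    """Toy model of a sharable SAN storage pool (illustrative only)."""

    def __init__(self, total_gb):
        self.free_gb = total_gb
        self.luns = {}        # lun_id -> (owning server, size_gb)
        self._next_id = 0

    def allocate(self, server, size_gb):
        """Carve a LUN for a server out of the shared pool."""
        if size_gb > self.free_gb:
            raise ValueError("pool exhausted: grow the pool, not each server")
        self.free_gb -= size_gb
        lun_id = self._next_id
        self._next_id += 1
        self.luns[lun_id] = (server, size_gb)
        return lun_id

    def reassign(self, lun_id, new_server):
        """Move a LUN to another server: a software change, no re-cabling."""
        _, size_gb = self.luns[lun_id]
        self.luns[lun_id] = (new_server, size_gb)
```

The point of the sketch is the asymmetry it makes visible: capacity planning happens once, for the pool, while per-server assignments become cheap, reversible metadata operations.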

Beyond sheer storage capacity, enterprises are facing even higher growth in transaction rates against that data. As more businesses expand their markets and provide services online, the volume of transactions against a given amount of storage is increasing dramatically. Front-office applications such as retail point-of-sale, Web serving, funds transfers, online banking, airline reservations, and others drive a host of back-office application transactions. The need for real-time billing cycles, supply chain management, and CRM further increases the transaction load.

This growth in transaction rates complicates storage consolidation. An increase in transaction rates requires an increase in servers. As more transaction servers are brought online, they need additional connectivity and bandwidth to the same storage source. Many older storage systems have had to add storage controllers to satisfy this demand for connectivity and bandwidth, even though they are using only a fraction of their maximum storage capacity. In addition to the cost of another controller, applications may need to be split into multiple instances, data may need to be duplicated, and additional software licenses are required. Adding controllers negates many of the benefits of storage consolidation.

Meeting the Bandwidth Challenge: Virtual Private Storage

As more and more servers were connected to the storage network, the number of physical ports available in a storage system also became a concern. Customers began asking for an increase in storage ports from 16 to 32 to 64, which would have added a great deal of cost and left underutilized port bandwidth on the table. While most Fibre Channel ports today have a bandwidth of 200MB/s, most applications transfer at only 5 or 10MB/s. The gap will become even bigger when the industry moves to 10Gb/s speeds.

One possible solution was to connect many servers through the same storage port, but users were reluctant to do so since the servers all have to share the same set of LUN addresses. Only one server could use LUN 0 for system boot, and all the other LUN addresses had to be carefully partitioned between the servers in order to avoid corruption of each other's data.

Hitachi responded by virtualizing the storage ports so that each physical port looked like 128 virtual ports. Unlike physical ports that require a mode set to communicate with different host platforms and thus can handle only one flavor of host, virtual ports can support many heterogeneous open systems platforms simultaneously (see figure).

This works because each virtual port can be assigned its own virtual private storage space so that multiple heterogeneous hosts that share the same physical port no longer need to share the same LUN address space. They can reboot independently through their own LUN 0 and there is no danger of overwriting each other's data. This ensures safe multi-tenancy; that is, multiple heterogeneous hosts can safely share a common physical storage system. Virtual private storage is analogous to virtual private networks in the IP networking world.
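A toy model makes the virtual-port idea concrete. The sketch below uses our own names, not Hitachi's actual interface: one physical port is presented as 128 virtual ports, each with a private LUN address space, so two hosts behind the same physical port can each own a bootable LUN 0 without colliding.

```python
class VirtualPort:
    """One virtual port with its own private LUN address space."""

    def __init__(self):
        self.luns = {}  # LUN number -> backing volume

class PhysicalPort:
    """Toy model: one physical storage port virtualized into 128 virtual ports."""

    N_VIRTUAL = 128

    def __init__(self):
        self.vports = [VirtualPort() for _ in range(self.N_VIRTUAL)]

    def map_lun(self, vport_id, lun, volume):
        # each host sees only its own virtual port's LUN space, so LUN 0
        # can safely exist on every virtual port at once
        self.vports[vport_id].luns[lun] = volume
```

Because the address spaces are disjoint per virtual port, the careful manual LUN partitioning the previous paragraph describes simply disappears: "LUN 0" means something different to each tenant.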

Friday, December 22, 2006

Global namespace: The Future of File System Management, Part 2 - Storage Networking

IT administrators spend a great deal of time on file management tasks (adding users, adding file servers, rebalancing storage, setting up failover, etc.) and data movement tasks (replication, migration, consolidation, data distribution). These are tedious and time-consuming for administrators, disruptive to users, and expensive for companies.

Companies are looking for better ways to scale and manage their file systems. Global namespace provides the answer.

In "Global Namespace--The Future of File System Management, Part 1," we defined global namespace as a logical layer that sits between clients and file systems for purposes of aggregating multiple, heterogeneous file systems, and presenting file information to users and applications in a single, logical view. The benefits of a global namespace are clear and compelling:

* Users (and applications) are shielded from physical storage complexities. Administrators can add, move, rebalance, and reconfigure physical storage without affecting how users view and access it.

* Global namespace provides a platform for developing value-added functions such as data migration, server consolidation, and disaster recovery.

With a global namespace in place, the administrator can perform data management and data movement tasks in less time, without disrupting user access to files. When files are moved, links in the namespace are automatically updated, which reduces manual administration and ensures continuous client access to data.

In this article, we will discuss how a global namespace simplifies file management, how to create and deploy a namespace, and the solutions it enables in an enterprise environment.

The Problem: File (Data) Management and File (Data) Movement

Stephens Company has 600 marketing and engineering users who are accessing files across five file servers that are shared by the two departments located in Houston and New York City. Marketing users are accessing files via multiple drive letters that are mapped to two NetApp filers and a Windows server, and engineering users are accessing files on three servers and one filer.

There are several issues with Stephens Company's current file system environment:

* Users find it difficult to locate and access files via multiple drive letters (which are increasing).

* File server Houl_W2K_Server12 is at 90 percent capacity, while NY_NAS_Server2 is at 20 percent, which means that users are beginning to get errors as they try to save large graphic files to Server12.

* To migrate files and rebalance storage between Houl_W2K_Server12 and NY_NAS_Server2, the administrator must disable user access to the files that are to be moved, move the files to the NY filer, reboot and bring both devices back online, revise all marketing and engineering user login scripts, and inform users that the files have a new location so that their PCs can be reconfigured to access them. This will require at least 12 hours of downtime and the manual reconfiguration of every desktop and application that accesses the files.

The Solution: Global Namespace

There is a simple, long-term solution to Stephens Company's data management and data movement issues. Deploying a global namespace will simplify data management and enable transparent data movement.

Figure 2 shows the new configuration, in which a global namespace has been inserted between users and physical storage. Users now access their files through shares called \\namespace\users\marketing and \\namespace\users\engineering. This was a non-disruptive installation, as the namespace was installed on top of the existing infrastructure. Users continue to access files in the same manner as before, with no retraining needed.
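The layering can be sketched in a few lines. The Python model below is purely illustrative (class and share names are our own, reusing the article's example paths): the namespace maps logical share paths to physical locations, and a migration updates only the link, so clients resolving the logical path never notice.

```python
class GlobalNamespace:
    """Toy namespace layer: logical paths -> physical shares (illustrative only)."""

    def __init__(self):
        self.links = {}

    def publish(self, logical, physical):
        self.links[logical] = physical

    def resolve(self, logical):
        # clients always ask for the logical path
        return self.links[logical]

    def migrate(self, logical, new_physical):
        # data is moved "behind the veil"; only the link changes, so no
        # login-script edits or desktop reconfiguration are needed
        self.links[logical] = new_physical

ns = GlobalNamespace()
ns.publish(r"\\namespace\users\marketing", r"\\Houl_W2K_Server12\mktg")
ns.migrate(r"\\namespace\users\marketing", r"\\NY_NAS_Server2\mktg")
print(ns.resolve(r"\\namespace\users\marketing"))  # clients still use the same path
```

Contrast this with the 12-hour migration in the Stephens Company example: the expensive part there was not moving bytes but updating every client's idea of where the bytes live, which is exactly the step the namespace absorbs.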

Note how the file system environment is changed by the introduction of a global namespace:

* All users see a single, logical view of files through the namespace.

* Users access all their files through a single drive letter--which will not grow, and allows them to continue accessing files in a familiar way.

* Data can be organized and presented to users in a way that makes sense to them, irrespective of how or where the data is stored.

* Data management and data movement are performed "behind the veil" of the namespace.

* Data changes are automatically updated in the namespace, and require no client reconfiguration.

* Administrators can expand, move, rebalance, and reconfigure storage without affecting how users view and access it.

* Data management and data movement require far less physical administration and are performed in less time than before.

Having a global namespace in place makes it easy for IT managers to accommodate the changing needs of an organization while also reducing storage costs.

Bluetooth is coming - wireless networking protocol - Brief Article

The wireless technology known as Bluetooth has been greeted with huge enthusiasm throughout the computer and communications industries, and soon, consumers can expect to enjoy the convenience, speed, and security of instant wireless connections.

Bluetooth wireless technology has become a global specification for "always on" wireless communication between portable devices and desktops. Simply put, it's a way for devices such as PCs, handhelds, and cell phones, to "talk" to each other and synchronize data--all without wires. As more companies begin to create Bluetooth-enabled devices, look for the technology to explode by year's end. The first set of Bluetooth-enabled devices will ship midyear.

"To meet these expectations, Bluetooth is expected to be embedded in hundreds of millions of mobile phones, PCs, laptops, and a whole range of other electronic devices in the next few years," explains Nathan Muller, author of the new book, Bluetooth Demystified (McGraw-Hill; $49.95). Bluetooth has a range of up to 30 feet, giving users greater mobility in the workspace. Unlike infrared connections, users don't have to have a line-of-sight connection to the device being accessed. Plus, without cables, the work environment looks and feels more comfortable.

Bluetooth can also be used to make wireless data connections to conventional local area networks (LANs) through an access point equipped with a Bluetooth radio transceiver that is wired to the LAN. For example, you can reply to an e-mail on your PDA, tell the device to make an Internet connection through a mobile phone, print a copy of the e-mail on a printer nearby, and record the original on the desktop PC--all while walking down the hall.

Since its development in 1994 by Ericsson, more than 1,800 companies worldwide, including Motorola, have signed on as members of the Bluetooth Special Interest Group (SIG) to build products with the wireless specification and promote the new technology in the market.

But with any new technology, there's a downside. And this one is no different. With Bluetooth, you can synchronize all of your devices only as long as they are within that 30-foot range. Beyond that, you're out of luck. And if you want to synchronize data with others (for example, sharing your contact list with a colleague), setting up Bluetooth to do this can be a tedious and involved process. In this case, infrared might be a better option.

Thursday, December 21, 2006

SCADA networking - Wireless products - Brief Article

The Pathfinder Lx wireless internetworking controller is a next-generation networking tool for building reliable remote industrial and utility SCADA Ethernet and TCP/IP-based wide-area wireless networks. The controller integrates all the capabilities and features to support the most popular licensed (VHF, UHF, MAS) and unlicensed wireless transceivers. An integrated SCADA-device gateway supports all network-ready Ethernet RTU/PLC or I/O devices. Configuration tools make building, commissioning and maintaining mission-critical, fault-tolerant infrastructure networks quick and simple. The unit's built-in networking engine supports the building of peer-to-peer, point-to-point, multipoint and mesh topologies, including wireless relays and hubs.--Metric System

Cisco Press

Three weighty new Cisco technical guides provide overviews and solutions for systems managers and administrators. Stefan Raab and Madhavi W. Chandra's MOBILE IP TECHNOLOGY AND APPLICATIONS: REAL-WORLD SOLUTIONS FOR MOBILE IP CONFIGURATION AND MANAGEMENT (158705132X) tells how the Internet and wireless communications are changing how people access information. The challenge is effectively moving data across different networks, and Mobile IP is reviewed as one solution to this problem, standardizing operations and addressing its practical applications. Lay readers will be able to access this, and even more so Jim Doherty and Neil Anderson's HOME NETWORKING SIMPLIFIED: AN ILLUSTRATED HOME NETWORKING HANDBOOK FOR THE EVERYDAY USER (1587201364, $24.99), which explains how anyone can build a home networking system. From networking computers and securing them against hackers and viruses to going wireless and understanding the latest home networking options, HOME NETWORKING SIMPLIFIED is written in simple enough language that all ages can easily understand how to get the most out of the computer system. Technical network managers involved in IP communications systems will want to keep Danelle Au et al.'s CISCO IP COMMUNICATIONS EXPRESS: CALLMANAGER EXPRESS WITH CISCO UNITY EXPRESS (158705180X, $70.00) close at hand. Not for the amateur, CISCO IP COMMUNICATIONS EXPRESS packs in detailed technical information not available in other resources, showing how to use product features more effectively, providing solid examples from real life to help in the configuration of the program, and aiding in common troubleshooting problems. Anyone taking the CCIE Security written exam will want to use Henry Benjamin's CCIE SECURITY EXAM CERTIFICATION GUIDE, SECOND EDITION (1587201356, $79.95) as their basic guide. CCIE is an important security certification and one of the most highly valued networking certifications one could obtain. Here's an in-depth, detailed study tool for the 2.0 exam version, which has been updated and reviewed by past and present members of the CCIE Security team at Cisco.

Wednesday, December 20, 2006

Networking machine tools pays dividends

Abbassian: Shop-floor networking, or DNC software, establishes a gateway from the numerous CNCs, PLCs, touch probes, gages, bar code readers, and other shop-floor devices to the corporate network. The benefits to the shop floor are nearly identical to those for office PCs. Today, we take for granted that every PC in a business will be part of a corporate network. Even at home, multiple PCs are networked together. The benefits, whether at the office or at home, include file sharing, printer sharing, Internet access, email, and database access. It's no different on the shop floor. What is different is that Ethernet, the hardware connection that is universal in the office and at home, has only recently entered the shop floor. The bulk of the existing CNCs, PLCs, and other shop-floor devices have RS-232 ports, not Ethernet ports. This simple fact drives the need for shop-floor networking and communications software that understands both RS-232 and Ethernet-based communications. Once the shop-floor network is in place, several manufacturing-specific benefits are available, including file sharing, printer sharing, e-mail access, and database access. With file sharing, machinists can push a few buttons on their CNCs and remotely request the appropriate CNC programs, fixture offsets, and tool offsets from a file server. Edits to CNC programs can be easily saved back to the file server, again by the machinist, and touch probe data can be automatically collected and stored on the file server. All of these benefits and more can only be accomplished after successfully implementing a modern shop-floor networking and communications system.

Abbassian: With one customer, we are tracking and monitoring every rivet on airplane exteriors. This process has specific challenges because the average cycle time is only three seconds. With another customer, we're tracking and monitoring the torque and number of revolutions for every nut and bolt used while assembling a turbocharger. This data is recorded per serial number for every turbocharger made. By monitoring every serial number during the production process, in-process confirmation back to the traveler database ensures that every turbo is assembled correctly, because every operation is tracked. Should errors occur, the assembly line is automatically halted and notification of the problem and its resolution is given to shop-floor personnel in real time. Online access over the Internet allows engineering staff in the US to monitor production in Mexico in real time.

ME: Some shop-floor automation packages, like Predator's, include tool crib management. How critical is tool crib management for shops today?

Abbassian: For most shops, tooling cost is an unmanaged cost. Most shops have inventory control problems and tool management problems for the following reasons: re-using existing tooling is not done often enough, because nobody knows the status and location of every tool, cutter, and insert, so new tooling is ordered; standardized tooling and kitting is not used in enough situations; multiple personnel have authority to purchase tooling; and tool consumption can vary by machine, part, and machinist. Problems also occur when production stops for lack of tooling; when excessive shipping costs and premium prices are paid for tooling; when best practices with tooling, materials, and parts are not recorded or referenced; when tool rework and repair is not recorded or referenced; and when physical tool inventories are not performed. All of these problems are solved by successfully implementing a modern tool crib management system.

ME: How has the Internet changed the process of data collection for manufacturing?

Abbassian: The Internet allows manufacturers access to real-time production status, reports, and charts from any PC. For example, small shops can monitor their machines from home. Medium-sized manufacturers with engineering in one location can monitor manufacturing in a second location. Large manufacturers with multiple plants around the world can monitor overall production, or drill-down to a plant, a department, a cell, or even a machine, as necessary. Combining manufacturing data collection (MDC) with the Internet enables progressive companies to leverage the trends in outsourced manufacturing, industry consolidation, just-in-time, and lean manufacturing to the next logical level. MDC builds a knowledge base of your actual manufacturing capabilities. With the Internet, this production knowledge base is available anywhere in real-time.

Networking Infrastructure Glossary

DES, 3DES (Data Encryption Standard, Triple DES) A standard method of encrypting and decrypting data. A DES key has a 64-bit value; 8 bits are used to check parity, 56 bits for the encryption algorithm. Triple DES uses three 56-bit keys, for a total of 168 bits.
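The parity arrangement in the entry above is easy to check in code. The helper below is an illustrative sketch (not part of any crypto library): DES keys conventionally use odd parity, so each byte carries seven key bits plus one parity bit chosen to make the byte's total count of 1 bits odd.

```python
def has_odd_parity(key: bytes) -> bool:
    """True if every byte of an 8-byte (64-bit) DES key has an odd
    number of 1 bits -- 7 key bits plus 1 parity bit per byte."""
    return len(key) == 8 and all(bin(b).count("1") % 2 == 1 for b in key)

# each byte below has exactly one 1 bit, so parity is odd and the key is well-formed
well_formed = has_odd_parity(b"\x01\x02\x04\x08\x10\x20\x40\x80")
```

Only the 56 non-parity bits contribute to security, which is why Triple DES with three 56-bit keys is described as 168-bit.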

Diffie-Hellman A public-key cryptography protocol, first published in the 1970s. It allows two parties to establish a shared secret over an insecure communications channel and is used within IKE to establish session keys.
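A toy exchange shows the mechanism. The numbers below are deliberately tiny for readability; a real deployment (for example, inside IKE) uses primes of 2048 bits or more.

```python
# Toy Diffie-Hellman exchange with tiny public parameters
p, g = 23, 5                     # public: prime modulus and generator
a, b = 6, 15                     # private values, never transmitted

A = pow(g, a, p)                 # Alice sends A over the insecure channel
B = pow(g, b, p)                 # Bob sends B over the insecure channel

shared_alice = pow(B, a, p)      # Alice combines Bob's value with her secret
shared_bob = pow(A, b, p)        # Bob combines Alice's value with his secret
assert shared_alice == shared_bob == 2   # both derive the same session secret
```

An eavesdropper sees p, g, A, and B, but recovering a or b from them is the discrete logarithm problem, which is what makes the derived secret safe to use as a session key.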

ESP (Encapsulating Security Payload) An encryption and validation standard used with IPsec.

IKE (Internet Key Exchange) An automatic security negotiation and key management service, used with the IPsec protocols.

IPsec (IP Security) A widely used collection of security protocols developed and supported by the IETF (Internet Engineering Task Force), which allows for private and secure communications across the public Internet. Over 40 RFCs (requests for comments) specify authentication, encryption, and key management in IPsec.

L2TP (Layer 2 Tunneling Protocol) A merging of features from PPTP and Cisco's L2F. It is used to encapsulate PPP frames and transmit them across a TCP/IP network. As an IETF standard, L2TP is supported by many VPN providers.

MPLS (MultiProtocol Label Switching) An IETF-defined protocol used in IP traffic management. Basically, it provides a means for one router to pass on its routing priorities to another router by means of a label, without having to examine the packet and its header, thus saving the time required for the latter device to look up the address for the next node. It can also facilitate quality of service (QoS).
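The label-swap operation in the entry above can be sketched simply. The table contents here are hypothetical; the point is that forwarding keys off the incoming label alone, never the IP header.

```python
# Hypothetical label forwarding table: in_label -> (out_label, next_hop)
lfib = {
    17: (29, "router-B"),
    18: (44, "router-C"),
}

def forward(lfib, in_label):
    """Toy MPLS label swap: look up the incoming label, swap it for the
    outgoing label, and pick the next hop -- no IP header parsing."""
    out_label, next_hop = lfib[in_label]
    return out_label, next_hop
```

Each hop repeats this swap with its own table, so the end-to-end path (and any QoS treatment) is fixed when the label-switched path is set up, not recomputed per packet.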

PPP (Point-to-Point Protocol) A TCP/IP-based protocol used to transmit IP packets over serial point-to-point links.

PPTP (Point-to-Point Tunneling Protocol) A tunneling protocol developed by Ascend Communications, ECI Telecom, Microsoft, and U.S. Robotics that encapsulates PPP frames over TCP/IP networks. There is no standard implementation of PPTP.

RADIUS (Remote Authentication Dial-In User Service) A client/server protocol and software package that enables remote-access servers (VPN concentrators in this case) to communicate with a central server to authenticate dial-in users and authorize their access to the requested systems or services.

RSA The public-key cryptographic system developed in 1977 by Ron Rivest, Adi Shamir, and Leonard Adleman. RSA is the most commonly used public-key encryption and authentication algorithm.
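A textbook-sized example shows the key setup and the encrypt/decrypt round trip. The numbers are toy values (insecure; real RSA uses 2048-bit or larger moduli, with padding), and `pow(e, -1, phi)` for the modular inverse requires Python 3.8+.

```python
# Textbook RSA with toy numbers
p, q = 61, 53
n = p * q                   # 3233, the public modulus
phi = (p - 1) * (q - 1)     # 3120
e = 17                      # public exponent, coprime with phi
d = pow(e, -1, phi)         # private exponent via modular inverse

m = 65                      # message, must be less than n
c = pow(m, e, n)            # encrypt with the public key (e, n)
assert pow(c, d, n) == m    # decrypt with the private key (d, n) recovers m
```

Security rests on the difficulty of factoring n back into p and q; anyone can encrypt with (e, n), but only the holder of d can decrypt.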

TACACS (Terminal Access Controller Access Control System) A protocol for authenticating users attempting to gain access to servers, networks, and remote-access servers. Similar to, but less secure than, RADIUS and TACACS+.

Tuesday, December 19, 2006

Cisco Systems Announces Winners of its 6th Annual Growing with Technology Awards 2005; Distinguished Group of SMBs, Non-Profit and Public Sector Organ

SAN JOSE, Calif. -- A California robotics company, an Illinois school district, and a Georgia legal services organization are among this year's winners of the Cisco Growing with Technology Awards. Cisco Systems(R) today announced the winners of its annual awards program, which recognizes small- and medium-sized businesses (SMBs) and non-profit and public sector organizations for their unique adoption of networking solutions to drive business success.

One grand-prize winner and two runners-up in each of five categories were selected from more than 600 applications received by Cisco for this year's contest. Representatives from Cisco worked with a diverse panel of industry experts to narrow the broad field of entrants and select the 15 winning companies.

"The Cisco Growing with Technology Awards shine a spotlight on innovative companies and organizations that have improved their operations through their investment in information and networking technology," said Mark Quinn, Northern California District Director, U.S. Small Business Administration, and one of the contest's judges. "All winners clearly demonstrated that technology really does deliver added value and greater returns, and that it is critical to every part of their business." The winners in each of the five categories of the Cisco Growing with Technology Awards 2005 include:

Innovators in Customer Relationship

--Grand Prize: Elliance (21 employees), based in Pittsburgh, Pa.

--Runners-up: A2Z Computer Service and Repair (4 employees), based in Durham, N.C.; and Walker Information (250 employees), based in Indianapolis, Ind.

Innovators in Non-Profit

--Grand Prize: Georgia Legal Services Program (165 employees), based in Atlanta, Ga.

--Runners-up: National Center for Missing and Exploited Children (320 employees), based in Alexandria, Va.; and Scheurer Healthcare Network Organization (385 employees), based in Pigeon, Mich.

Taking Networking to the Bottom Line

Storage area networking also took a share of the CeBIT spotlight, as vendors sought to encourage new uses of the traditionally static networking medium. Cisco Systems Inc., of San Jose, Calif., rolled out product enhancements to enable enterprises to use storage area networks more efficiently.

Meanwhile, adding an element of flexibility to the health care industry's storage usage, IBM and Siemens AG's Medical Solutions division, in Munich, Germany, unveiled a storage-on-demand service tailored to hospitals and clinics. The notion behind IBM's e-business-on-demand effort is to allow enterprises to buy IT the way they buy electricity—paying only for what they use.

An increase in the use of MRIs, X-rays and other high- resolution digital imaging has produced expanding medical files that must be stored for long periods of time. The growing volume of data requires a high storage capacity, but hospitals cannot always afford the initial investment for it. With storage on demand, hospitals and clinics pay only for the storage they use.

With a similar philosophy, Hewlett-Packard Co. updated its automated metering technology that measures the processing power used by a CPU or server. The "pay per use" service allows an enterprise to pay only for actual usage on a monthly basis, and the usage data is automatically collected, encrypted and sent to the Palo Alto, Calif., company for billing.

HP's latest metering technology reads the use of each CPU, which can help an enterprise better respond to changes in power demand. Like IBM, HP is developing a series of on-demand offerings to save enterprises from paying for resources they do not need.

Data traffic management companies also displayed a range of updated tools to enable more efficient traffic usage and reduce networking costs. Extreme Networks Inc., of Santa Clara, Calif., incorporated new features into its traffic management service for enterprise campus networks. The features allow advanced rate shaping, so that a network manager can set bandwidth thresholds to better control incoming and outgoing traffic.
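Rate shaping of this kind is commonly built on a token-bucket scheme; the sketch below is a generic illustration, not Extreme Networks' implementation, and the rates and sizes are made up:

```python
# Generic token bucket: traffic is admitted while tokens remain; tokens refill
# at the configured rate, so sustained throughput is capped at that threshold.

class TokenBucket:
    def __init__(self, rate_bps, burst_bits):
        self.rate = rate_bps          # refill rate: the manager-set threshold
        self.capacity = burst_bits    # maximum burst allowance
        self.tokens = burst_bits      # bucket starts full
        self.last = 0.0

    def allow(self, packet_bits, now):
        """Refill by elapsed time, then admit the packet if tokens remain."""
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bits:
            self.tokens -= packet_bits
            return True
        return False

bucket = TokenBucket(rate_bps=1000, burst_bits=1500)
print(bucket.allow(1200, now=0.0))  # True: within the burst allowance
print(bucket.allow(1200, now=0.1))  # False: only ~400 tokens have accrued
```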

Monday, December 18, 2006

UK carrier Kingston Communications now deploying infrastructure with FSP 2000 from ADVA Optical Networking - International - Brief Article

ADVA Optical Networking announced that UK carrier Kingston Communications has expanded deployments from ADVA's Fiber Service Platform (FSP) 500 systems to now include the FSP 2000.

Kingston has deployed more than one hundred FSP 500 systems since 1999 through the continued expansion of its national telecommunications network in the United Kingdom. The carrier found its niche in the market by targeting small- to medium-sized enterprises with ADVA's FSP 500 for inexpensive managed services. Initial deployments of 10/100 Ethernet connections evolved into Gigabit Ethernet and now STM-4 services. Kingston has established, to date, twenty-five metro regional fiber networks and its own long-distance fiber network with DWDM technology.

ADVA's FSP 2000 is currently being deployed at specific points in Kingston's network infrastructure to relieve fiber exhaustion and address mismatched-fiber problems. The FSP 2000 employs parallel use of Coarse/Dense Wavelength Division Multiplexing (C/DWDM) and TDM technology and enables up to 512 applications to be transported over a single fiber pair at distances of up to 200 kilometers. Its robust design and ability to transport all protocols between 8Mbit/s and 10Gbit/s make the FSP 2000 an ideal solution, particularly for large-scale, high-bandwidth storage networks. ADVA's FSP 500 is a managed fiber access and CWDM solution specifically designed as a very cost-effective system for delivering high-speed data, storage, and voice applications. It supports service speeds ranging from 2Mbit/s to 2.5Gbit/s over distances of up to 70 kilometers on singlemode fiber.

Failure is not an option: what's designed to be more robust than the electronics on the Space Shuttle? The control systems that manage engine performa

"Failure is not an option." The declaration by legendary NASA Flight Director Gene Kranz during the Apollo 13 mission in 1970 has special meaning for locomotive OEMs and aftermarket suppliers whose goal is to produce microprocessor-based control platforms that can deliver very high levels of reliability and availability.

Want to test the ability of an industrial-grade microprocessor to withstand temperature and vibration extremes? If your first thought would be to send it into orbit aboard the Space Shuttle, think again. You'd probably be better off incorporating it into Electro-Motive Division's FIRE (Functionally Integrated Railroad Electronics), GE Transportation Systems' CCA (Consolidated Control Architecture), or Wabtec Railway Electronics' ETMS (Electronic Train Management) systems.

These and numerous other types of locomotive and engine control systems from such suppliers as EMD, GETS, Wabtec, ZTR Control Systems, and Kim Hotstart are changing the way locomotives are operated, maintained, and repaired. Such technologies control everything from electronic fuel injection timing, a.c. traction current, wheelslip control, and engine cooling to LCD cab displays and onboard health monitoring and diagnostics. Regular maintenance cycles of 180 days? Prime-movers that emit significantly less NOx and particulate matter? Locomotives that produce dispatch adhesion factors 25% greater than older models and are available better than 90% of the time? It's mostly in the electronics.

"The Space Shuttle is behind in architecture compared to what we're doing here," says GETS Evolution Series Project Manager Pete Lawson. By this, he's referring to the CCA's dual-redundant, ARCNET-based (an industry standard) onboard network. It uses three 233-MHz Pentium III microprocessors running a UNIX-based operating system. The processors, which he says are "very rugged," can withstand temperatures ranging from -40 degrees C to +55 degrees C, such as those encountered in extreme cold climates and long tunnels where heat tends to get trapped. Pentium IVs running at 1 GHz are not yet available for heavy industrial applications, but when they are developed, "we'll be able to offer more functionality and efficiency," says Lawson. CCA memory is stored in flash cards--"no more hard drive."

For external communications, the CCA is able to interface through a variety of networking methods--RS232, Ethernet, etc.--for talking with end-of-train devices, train control, distributed power, event recorders, GPS, and other devices.

Communications are fully integrated through a system called LOCOCOMM[R] CMU (communications management unit) that serves as the basis for GETS's information-based services. It can host many GETS or third-party applications on its industry-standard "Wintel" platform, interfacing in real time with locomotive control and trainline systems. It's capable of processing such data as location, fuel level, and health information for wireless transmissions. A multi-mode antenna package can transmit data through a wide variety of communications systems. The remote services include Expert-on-Alert[TM], PinPoint[TM], LocoCAM[TM], and Smart Fueling[TM].

"The network is where different functions connect, and for real-time capability, data must arrive at the right time and react to operating changes at the right time," says Lawson. "Point-to-point type connections won't cut it."

Sunday, December 17, 2006

Conexant Cashes In On Networking Chips

Home networking has drawn a hive's worth of buzz over the past few years, and vendors are gearing up to meet the inevitable consumer demand. Count Conexant Systems among them.

Last week, the Newport Beach, Calif.-based semiconductor firm launched a line of home networking processors designed to power next-generation broadband gateways and wide area network access applications. Although the CX8611x family supports both wireless and wired network configurations, Conexant product marketing engineer Gilad Aloni said he expects consumers to favor a wireless approach.

"We're seeing greater demand on the wireless end," Aloni said. "To that end, we engineered the CX8611x line so that it's compatible with any of the various 802.11 a, b and g chipsets out there."

Although Conexant did not divulge the names of gateway manufacturers that have signed on for the new line of chips, earlier versions of its silicon can be found in products from Linksys, Netgear, D-Link, Belkin, and Efficient Networks.

On the MSO side, Time Warner Cable has been one of the first ops to dip a toe into the home networking stream. Last year TWC kicked off a series of trials in five markets, including Los Angeles, Orlando, Fla., and Albany, N.Y. Customer satisfaction was reportedly "very high."

Inspection System checks thermal seal quality

Utilizing infrared thermal imaging technologies, Maxline 2 inspects seals and generates control outputs with no negative effect on production time or speed. The inspection tool specifically monitors thermal seal signatures and generates pass/fail results with diagnostic codes. With its image de-blurring feature, the system can inspect continuous-motion items at up to 700/min; for indexing applications without the de-blurring feature, it can perform up to 300 inspections/min.

********************

Ircon is pleased to introduce an innovative new solution designed to rapidly and automatically inspect thermal seal quality within production processes.

The Maxline 2 Thermal Seal Checking System applies infrared thermal imaging technologies in a unique and innovative way - instantly inspecting seals and generating control outputs with no negative effect to production time or speed.

The solution provides a flexible, in-process, non-destructive means for inspecting every seal in a production process, even one that is rapid and continuous. It provides an inspection tool that specifically monitors thermal seal signatures and generates "pass/fail" results with diagnostic codes for three classes of defects:

Hot Seal Defects -- that may result from contaminants in the sealing area or overheated or unevenly-heated seal bars or heaters. This may lead to seal weakness or failure due to seal bubbling or roll-off.

Cold Seal Defects -- that may result from under-heated seal bars or heaters. This may lead to seal weakness or failure due to weak spots, gaps or voids.

Seal Voids and Creases -- that may result due to defects in seal bar or heater areas, or misalignment of sealing equipment or materials.
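A pass/fail decision of the kind described could, in its simplest form, compare a seal's peak thermal reading against configured limits. The thresholds, diagnostic codes, and function below are invented for illustration and are not Ircon's actual algorithm:

```python
# Hypothetical seal check: classify one seal's thermal signature against
# configured setpoints. Limits and codes here are illustrative only.

COLD_LIMIT, HOT_LIMIT = 120.0, 180.0   # degrees C, hypothetical setpoints

def inspect_seal(readings):
    """Return (pass/fail, diagnostic code) for one seal's peak reading."""
    peak = max(readings)
    if peak < COLD_LIMIT:
        return ("FAIL", "COLD_SEAL")   # under-heated bars: weak spots, gaps
    if peak > HOT_LIMIT:
        return ("FAIL", "HOT_SEAL")    # overheated bars: bubbling, roll-off
    return ("PASS", "OK")

print(inspect_seal([150.2, 155.0, 149.8]))  # -> ('PASS', 'OK')
print(inspect_seal([100.0, 110.5, 105.0]))  # -> ('FAIL', 'COLD_SEAL')
```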

As sealing errors are detected, the software can alert you to the type of error that is occurring, and can be programmed to automatically relay control signals to your process system(s) and equipment. Built-in TCP/IP networking capabilities also enable remote access -- to the system and the data it generates -- for those who need it.

Set-up of inspection parameters is highly flexible, designed for tailoring to a variety of production requirements, and settings can be saved and re-set for different production runs. The system also collects and stores data about every item inspected, for traceability and reporting purposes.

An advanced, rugged infrared sensing camera is included, designed to provide many years of reliable service placed in hostile production environments.

Targeted primarily to food, drug and medical equipment manufacturers, this system provides a quality control and process validation solution meant to increase inspection speed and coverage while lowering inspection costs and product recall risks.

Additional information on the Ircon Maxline 2 Thermal Seal Checking System can be found by registering on our web site a/sealcheck.

To discuss this system in further detail, and arrange a possible live demonstration of the system within your operations, please contact an Ircon representative or distributor near you.

Ircon specializes in the design, manufacture and marketing of a wide range of industrial, non-contact, temperature measuring solutions.

Products include infrared thermometers, line scanning systems, thermal imaging cameras and software, and related accessories.

Saturday, December 16, 2006

High wire act: an installer can help juggle electronic systems and specialist

Just about any custom builder will tell you the days when new-home buyers were satisfied with a television in the den and a stereo in the living room are long gone. Clients planning custom homes are now likely to ask for a variety of electronic upgrades, from waterproof speakers in the shower to a dedicated home theater. Staying ahead of that technological wave during an extended design and construction schedule can be a challenge, requiring builders not only to know the right specialists but to keep track of what they're doing as well.

Home systems often include plasma televisions, complicated controls for lights and mechanical equipment, whole-house sound systems, and home offices complete with local area computer networks. Some features require complicated custom-made cabinetry, and linking all these components together requires enough cabling to leave even experienced designers agog. Minnesota-based architect Dale Mulfinger of SALA Architects is still amazed at how much extra wiring is required even though complex electronic systems are now routine. "The amount of spaghetti is truly amazing," he says.

Builders and architects can feel lost when it comes to recommending and installing the kind of hardware that buyers ask for. As a result, system installers are often invited to join the planning process, ideally before construction has actually started. These specialists can offer valuable advice to clients while overseeing the installation of special cabling and hardware. But with construction often taking a year or more, good communication, detailed planning, and clearly defined roles for everyone on the jobsite are essential for keeping problems at bay.

Builders, architects, and installers themselves are unanimous in suggesting that planning for electronic systems start early. "The biggest thing is that you need to get in on the ground floor, in the beginning, because all that pre-wiring needs to go in when the house is being wired," says Joe Stanton, a Rhode Island custom builder and owner of JMS Builders. "It's much cheaper to go in and pre-wire rooms than it is to go back in and try to retrofit something later. So even if the people didn't think they wanted to have the latest and the greatest for networking and computers, it's much easier to run big cable and have the option to do all that."

Even when clients don't appear interested in extensive home entertainment systems or lighting and HVAC controls, installing the cabling allows them to change their minds later. And it probably will make the house more attractive when owners want to sell.

Stanton connects his client with a specialist he knows he can count on, then suggests that his clients sign a separate contract with the installer. That saves Stanton from fielding weekend calls later about a balky television or light control panel. "I make the connection and I recommend who they could use," he says, "but they write their contract direct and I stay outside of it. It's no different than me recommending an appliance store where they can pick out their appliances. They don't actually buy them from me."

Other builders may sign on a specialist but still prefer a closer connection with the work. For example, Matthew Beardsley, a custom builder in Bozeman, Mont., relies on an outside specialist for the installation, but wants to handle any trouble calls once the client has moved in. "We typically do everything," he says. "If the client has a problem, they call us, and we then contact the appropriate person. It's part of our way of keeping the client happy." One advantage of his approach is that it makes it easier to spot areas where planning or construction could be improved next time.

Classroom and Group Extension of Family Systems Concepts

The focus of this article is the extension of family systems concepts into the classroom as well as into educational, support, and counseling groups held in schools for family members. Family systems concepts and methods can be used to pursue goals in both the cognitive/academic and affective/social domains. The article begins with information on the extension of family systems perspectives into the academic worlds of curriculum and then instruction. Those discussions are followed by information on techniques, including metaphor and retraining, as they relate to family systems concepts. The article concludes with a section on socialization, which discusses in detail a family systems method called temperature reading.

ACADEMIC CURRICULUM

This part initially focuses on Satir's (1988) five communication stances, which are characterizations of human behavior. Four of the five are dysfunctional; the last is a functional, congruent stance. Information on these stances could be considered a curriculum content area in its own right, and family members might also benefit from the information. Further, understanding the stances can help school professionals in networking and in making referrals to groups, as well as in counseling families. In other words, school professionals should be familiar with the stances and with how that knowledge can help them when interacting with at-risk and special-needs students and their families. Following this discussion, the focus shifts to the concerns of parents about new and controversial curricula that are being considered or have been implemented in the school.

An understanding of communication stances (Satir, 1988) can help school professionals to refine their personal communication styles so that they present a single-level, congruent message. Professionals can also use knowledge about the stances to identify dysfunctional communication in the schools and then to intervene to help others recognize incongruent communication and begin to use congruent communication. Satir (1983b, 1988) provided many examples of ways in which trained professionals can help others recognize and change their communication stances so that they are congruent most of the time.

This section provides a brief background on Satir's communication stances, describes each of the five stances, and provides examples of the use of communication stances as a curricular area for students. The discussion then turns to the implementation of such a curriculum, with suggestions given for exploring personal communication stances and ways to respond to the stances of students and family members.

Friday, December 15, 2006

A Roadmap for the Successful Implementation of Competitive Intelligence Systems

THE SUCCESSFUL DESIGN, DEVELOPMENT AND DEPLOYMENT OF A CI system requires a good project plan. Much like a roadmap, this plan serves to identify important milestones and provide information about alternative routes that can help the project team(s) avoid delays. According to a survey by the Delphi Group, 58% of the useful knowledge of an organization is recorded information (documents and databases) and 42% resides in employee brains (Hickens 1999). Integrating knowledge management and competitive intelligence encourages their use, improves their quality and allows the firm to respond more rapidly to changing business conditions (Senge 1994), so the best CI system uses what is already inside the organization. One of the first decisions is whether to improve access to the organization's recorded information or elicit knowledge that currently resides in employee brains. Regardless of format or location, an organization's knowledge is generally filtered through both a cognitive dimension and a relationship dimension.

The cognitive dimension focuses on the "stuff," but to identify the important attributes of the relevant "stuff," it is important to know how it is filtered through the relationship dimension, which has its own set of characteristics. Experienced consultants have identified the following critical failure indicators: (1) lack of informed consensus; (2) acceptance of the status quo; (3) unwarranted trust in the vendor; (4) failure to support the business purpose; (5) a short-term, internal, myopic approach; (6) paralysis by analysis; (7) sabotage by external predators; (8) suicide through ignoring project constraints; and (9) failure to consider business, human or technology limitations imposed on the project (Tyson, 1998). Careful planning is the best form of failure prevention.

There are both management constraints and technical constraints to be considered. Management constraints involve three key problem areas: time, money and scope. The flexibility needed to deliver a quality project is severely hampered if any one or two of these three are fixed. For example, project constraints impact deadline constraints, and a fixed budget with deadline constraints generally kills any chance of success. Regarding technical constraints, is there any flexibility in terms of the tools available? It is imperative to avoid getting caught up in the "trade rag" hype, so a warning is appropriate here: NEVER buy off vendor presentations! Other key factors to consider include experience, whether legacy systems are involved, and whether the system will be "bleeding edge" or a patch. It also helps to know if this system will be a pilot for knowledge-sharing in the organization.

Unlike failure, success can't be guaranteed, but it is much more likely if the project includes: (1) flexible design; (2) willingness to implement a mechanized "less than ideal" system; (3) use of an evolutionary approach with prototyping; (4) giving users substantial (to total) control; (5) coordination by individual business units; and (6) active networking. Though there are many factors contributing to software project success, the presence of a committed project sponsor is one of the most important early success factors (Procaccino and Verner 2001). A committed sponsor has a significant impact on many of the project phases and project functions, including the (1) schedule estimates, (2) quality of the project team members, and (3) degree of interaction with other stakeholders.

Choosing Between Methodologies--Waterfall Lifecycle or Prototyping Lifecycle

Without acceptance criteria that are defined and agreed upon early, there is no way to recognize when the project is completed or whether it has been successful. Most project managers would prefer to use a waterfall lifecycle for project development, as this ensures better control of the project and its schedule (Verner and Cerpa, 1997). A waterfall development methodology must begin with good requirements in order to ensure project success, as poor requirements are a major cause of project failure.

Storage Changing So Fast It Even Obsoletes The Future - storage and networking planning - Industry Trend or Event

Just one year ago the IT industry was totally focused on the implications of the year 2000 and the effects that changing to a new century would have on computer systems. The impact of that event proved far less than anticipated and a year later we shift our focus to more strategic issues, trends, and observations shaping the computing industry. In particular the value of data continues to grow almost exponentially every day. Data (and storage) have become the center of the IT universe, as the computers are now satellites to the data storage infrastructure. Let's review some key trends and observations (factoids) from a variety of sources that may help with storage and networking planning as the New Year begins.

The worldwide Gigabit Ethernet packet switch market reached over 810,000 ports shipped in 2Q 2000, according to Cahners In-Stat Group. The high-tech market research firm expects the overall Gigabit Ethernet switch market to reach almost 4 million units in ports shipped and $4 billion in end-use revenue for 2000, resulting in an all time high: over 200 percent greater than the total number of ports shipped in 1999.

According to a recently published 2000 report "High Availability and Data Protection Practices" from Strategic Research, by 2003, databases will control access to more than 65% of the network's shared data. This further drives the requirement for true heterogeneous and homogeneous data-sharing.

Jupiter Research has released a study with pulse-quickening projections about the growth of the business-to-business commerce market. Jupiter's prediction that B-to-B commerce will expand from $336 billion this year to $6.3 trillion in 2005 is likely to raise the heart rate of even the most composed IT administrators as they try to figure out how to accommodate such exponential expansion.

However, Seagate Technology Inc. reckons this time it's got a larger lead on the competition, having taken the capacity on its Barracuda drives from 73GB to 180GB.

Europe's application service provider market will jump from a $275.1M market in 1999 to a $13.7B market in 2005, according to international marketing consulting company Frost & Sullivan.

The picture changed in 2000, according to Internet statistics published in TeleGeography 2001, this year's compendium of industry statistics from the Washington, DC-based firm. From 1999 to 2000, Internet bandwidth connecting Asian countries grew faster than any other region-to-region route in the world--including Internet bandwidth connected to the U.S. That meant that 13.5 percent of Asia's international Internet capacity was in-region, up from 6.2 percent the previous year. The Asian Internet is becoming less and less U.S.-centric and regional interconnection is on the upswing.

Thursday, December 14, 2006

Easy, Affordable Home Networking from D-Link

Wireless home networking just got a boost. D-Link Systems shipped its new DWL-120 USB Wireless Adapters, the first of several 802.11b USB adapters expected to hit the market, and shortly after slashed prices on its entire 802.11b line. The price reduction makes D-Link's 802.11b networking solution more cost-competitive than other wireless standards, such as HomeRF, and the addition of USB components simplifies setup.

The D-Link DWL-120 adapters are available separately ($99 street) or in the DWL-920 USB Wireless Kit ($399 street). The kit includes two DWL-120 adapters and the DWL-1000AP Wireless Access Point, a device that acts as a transmitter, receiver, and network manager.

Wireless networking is a boon to the notebook crowd. With a wireless adapter in your notebook, you can maintain a network connection as long as you stay within about 300 feet of an access point indoors or 900 feet outdoors. This enables you to share files, printers, and Internet access. D-Link's introduction of a USB adapter benefits users of desktop PCs because installation consists of merely plugging in a connector rather than having to open the computer's case and struggle to insert a network card.

HomeRF, the other wireless standard for home networks, held an advantage for those with desktop systems because the adapters connected via USB plugs and the components were less expensive. D-Link's recent announcements, however, have cleared these barriers to widespread 802.11b adoption. What's more, 802.11b is faster than HomeRF (11 Mbps versus 1.6 Mbps) and is already the undisputed corporate and campus standard.
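At the nominal rates quoted above (11 Mbps for 802.11b versus 1.6 Mbps for HomeRF), the speed gap is easy to quantify. The arithmetic below uses the raw signaling rates and a hypothetical 100 MB file, so real-world transfers would be slower in both cases:

```python
# Rough transfer-time comparison at nominal link rates (signaling rate,
# not real-world throughput; the 100 MB file size is hypothetical).

def transfer_seconds(megabytes, link_mbps):
    """Seconds to move a file at a given nominal link rate (8 bits/byte)."""
    return megabytes * 8 / link_mbps

file_mb = 100
print(round(transfer_seconds(file_mb, 11), 1))   # 802.11b: ~72.7 s
print(round(transfer_seconds(file_mb, 1.6), 1))  # HomeRF:  ~500.0 s
```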

Installing the DWL-920 was a cinch. The DWL-120 adapters, at 0.75 by 4 by 2.75 inches (HWD) and with a two-inch antenna on the side, required little room and simply plugged into available USB ports. Windows then prompted us to pop in the included CD and quickly installed the drivers. We restarted the computer (leaving the default settings) and were good to go. We tested the two adapters included in the kit using Windows Me and Windows 98SE. In each case, installation took under a minute. After rebooting, a network status icon shows up on the Windows task bar; green signifies a good connection, red means no connection, and yellow indicates a so-so connection. You can bring up a utility for changing settings, or leave everything at the defaults, as we did.

The 5.3- by 1.6- by 4.8-inch (HWD) DWL-1000AP access point was even easier to install than the USB adapters: we just attached the AC cable and turned on the power. The access point started working immediately with our adapters set to their defaults, and the PCs were instantly able to communicate with one another. To connect to an existing wired network, you plug an Ethernet cable into both the DWL-1000AP's RJ-45 jack and a hub or PC equipped with an Ethernet adapter. We plugged the DWL-1000AP into a hub connected to a router that was in turn connected to a cable modem. After rebooting our PCs, we were able to access the Internet via the network within minutes.

You're probably better off ignoring the user manuals that come with the D-Link devices, though, or at most reading the first few pages; the manuals quickly get into far more technical detail than most people setting up a home network will need.

BMC pulls the plug, competitors pick up the pieces - Storage Networking

BMC has announced that it will stop development on Patrol Storage Manager 3.1 (PSM), its flagship open systems storage product. The decision sent a tremor through the storage industry, upset existing PSM customers, and ended BMC's OEM partnership with the now out-of-luck Invio Software, a storage provisioning company. It also played havoc with payroll: BMC laid off 3.5 percent of its workforce--232 employees worldwide, 104 of them at BMC's Houston headquarters. All of this happened despite the fact that, in 2002, analyst firm Gartner tapped BMC as one of the top three SAN management software vendors.

Why the decision, why now, and what does this mean for storage development in general? Storage management remains a strong market, and most vendors, analysts and even customers agree that it's a growth area. But the storage management space has not achieved the overall revenue levels that analysts had predicted for it. Small and midrange companies are continuing to watch their budgets carefully, investing primarily in storage arrays and tape, not in management packages. Large data centers are spending big bucks on storage and storage management, but they tend to prefer the blue chip status and lengthy storage experience of IBM, HDS and EMC over companies that are less known in the storage field. (This includes both companies like BMC, well known in the networking and mainframe markets, and storage start-ups.)

BMC's initial reason for braving storage in open systems environments is understandable. The Aberdeen Group's David Hill, research director of Storage and Storage Management, lists three critical reasons IT invests in the area: controlling storage management costs, protecting company data, and increasing productivity in the face of flat or shrinking budgets. Accomplishing this, especially at an enterprise level, requires a certain level of intelligence in data management software, particularly in the storage management market. Companies that can fulfill their management promises stand to gain a great deal, especially if the economy improves and business releases its stranglehold on IT budgets.

In fact, BMC still doesn't doubt that storage management is a growth market. Dan Hoffmann, director of marketing for BMC, said, "This was not a decision based on pessimism about the industry. BMC believes storage is a growth market. It's just that in a world of tough choices, even good products can lose when they have to make a painful choice." Hoffmann noted that BMC is not dumping the product. "We have stabilized this product; we have no plans to enhance it. BMC values its PSM customers, and will support them over the next two years."

For network expert BMC, getting into open systems storage management was a gamble. BMC's traditional expertise is in systems and database management, represented by hundreds of different products across multiple product lines. Given its undoubted network management expertise, it hoped to apply the same level of integration, management and reporting to the storage networking environment. The company acquired Boole & Babbage's mainframe storage management tools in December 1998, re-engineered the mainframe tools to suit open systems, and announced its Application Centric Storage Management (ACSM) initiative in 2000. BMC was headed in the right direction: application-centric storage management improved on device-only management by policing application service levels. And the ACSM announcement trumped competing introductions of similar initiatives, which gave BMC a head start on convincing the market of application-centric storage's value. (In this schema, storage management and provisioning is based on application data and service levels. It's often integrated with policy applications that can relieve manual provisioning pressures on IT departments, which can be intense.)

However, by BMC's new fiscal year in 2003, PSM had not achieved the level of return that senior management wanted to see in a tight economy. The flagship storage management line took intensive development effort, and salespeople struggled to penetrate the ranks of storage customers. Senior management concluded that its development, marketing and sales investments in PSM would be better spent on its network management products, which were the primary source of BMC's revenues.