Sunday, May 16, 2010

What Does the “L” in PLM Really Mean?

In an earlier post, What Does the “P” in PLM Really Mean?, I discussed what the word “product” means in product lifecycle management (PLM). In this post, I move on to the next letter, “L” for lifecycle.

According to Merriam-Webster, one definition of lifecycle is “a series of stages through which something (as an individual, culture, or manufactured product) passes during its lifetime.” In a typical manufacturing environment, these stages include conception, design and development, manufacture, and service. Ideally, a PLM system should manage the entire lifecycle across all of these stages. Originally, however, the concept of PLM was designed to address product definition authoring and, later on, design data management issues for the design department. Not every stage receives equal attention under the PLM umbrella, and the application maturity of each stage is not yet at the same level.

Conception is the earliest stage of a product lifecycle. Within this stage, ideas are the raw input and development projects or tasks are the output. New ideas for product development come from different sources, such as research work, newly available technologies, brainstorming sessions, customer requirements, and more. Some of the ideas might be incorporated into existing products as new features; some might not be feasible at the moment; a large number might simply be eliminated; the rest (grouped or alone) might become new concepts, and some of those might finally reach the development level after evaluation. Briefly, the conception stage is a process of idea attrition—only the good ones get to the next step. In this area, management applications are not quite mature and the adoption rate is relatively low. Part of the reason might be that conception is strongly associated with creativity, and people are not yet convinced that this can be handled well by machines.

Product design and development is the main stage, where abundant product definition information is generated. When a concept becomes a development project, people need tools to define not only what a product should be (product design), but also how it should be manufactured (engineering design). Computer-aided design (CAD), computer-aided manufacturing (CAM), and computer-aided engineering (CAE) are all well-recognized PLM tools that support the definition, as well as some execution processes. The adoption of PLM tools increases engineers’ individual productivity tremendously—but they also need a platform to collaborate internally (with peers and other departments inside the organization) and externally (with development partners, suppliers, and customers). The application of PLM to the design and development stage is the most mature. It makes sense for most organizations to start their PLM initiatives at this stage because it produces the majority of product definition information.

Manufacture is a joint task performed by enterprise resource planning (ERP), PLM, and other systems such as manufacturing execution systems (MES). ERP takes the lead from the planning and control angles, and MES manages and monitors the production processes on the shop floor. The reasons for having a PLM system in place at this stage are:

1. PLM provides the information on what to produce and how to produce it.

2. A tight connection between PLM and ERP also helps companies develop better products that can be produced in a better way.

Service includes the marketing, sales, distribution, repair and maintenance, retirement, and disposal processes related to a product. The quality of these services relies on the accuracy, integrity, and timeliness of the product information provided. In general, the more complicated a product is, the more important it is to have product information available for these service activities. Another reason for having a PLM system is increasing environmental compliance requirements. For example, when a product enters the last stage of its lifecycle, the manufacturer has to make sure that the disposal procedure can be handled properly so that the disposition has minimal impact on the environment—especially for an asset-type product that lasts years or even decades. Instead of hoping that the user will keep the manual shipped with the product, the disposal instructions have to be stored and managed securely somewhere within the manufacturer’s PLM system.

Above, I discussed the product lifecycle stage by stage. However, the PLM methodology won’t reach its full potential unless you take a holistic view of all the stages. Although some stages mainly generate product definition information and others mainly consume it, it is more appropriate to think of every stage as both a consumer and a provider of product definition information. The reason for having a PLM system is to facilitate this information sharing. Thus, in theory, a comprehensive PLM system must cover all these stages. In practice, not all PLM solutions support the entire product lifecycle, and the priorities for managing different lifecycle stages differ. Nevertheless, managing the entire product lifecycle should at least be a long-term vision.

What Keeps EAM/CMMS Away From PLM?

Today, many assets are designed and manufactured with the help of product lifecycle management (PLM) tools and systems, which contain highly valuable product definition information for enterprise asset management (EAM) and computerized maintenance management system (CMMS) operations.

That being said, if there is a way to tie the two systems (EAM and PLM) together, the result will be beneficial to original equipment manufacturers (OEMs), asset owners, and third-party maintenance service providers. However, this isn’t an easy job. The following are a few barriers between EAM and PLM as I see it.

1. Two Different “Lifecycles”

Yes, the product lifecycle (PL) and the asset lifecycle (AL) are similar. Both may begin at a strategic starting point (e.g., PL begins with planning the next offering for the market, and AL begins with planning the next purchase of equipment for a factory) and finish at the disposal stage. However, the difference between the two is also obvious. Quite often, the concept of “product” in PLM doesn’t coincide with the physical asset within EAM. In PLM, a product is likely to be a model or a version, which may have multiple physical instances. In other words, in the development process, a product is not serial-number-sensitive in many cases. Even in cases where the product structure contains part serial numbers, a PLM system usually finishes its job once the product is built.

To EAM, the original configuration of an asset at the time of delivery is helpful, but the asset’s maintenance history and current configuration become more important as time passes. One solution for this situation is to connect EAM and PLM to each other.

2. Integration across Organizational Boundaries

As discussed above, it makes sense for an EAM system to be able to retrieve product design information from a PLM system and for a PLM system to receive maintenance records from an EAM system. Actually, some PLM systems are able to manage as-maintained product configuration, which means that the systems are capable of creating and maintaining configuration information for each physical asset. However, there are remaining issues, such as how asset owners or maintenance service providers will be able to access the product information as needed and how well the configuration information will be maintained every time a maintenance task is performed—given that PLM systems and EAM systems belong to different owners.
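
To make the notion of as-maintained configuration concrete, here is a minimal data-model sketch in TypeScript. It is illustrative only (not drawn from any particular PLM or EAM product): the same physical asset carries an as-built configuration frozen at delivery and an as-maintained configuration updated with every maintenance task.

```typescript
// Illustrative data model: one physical asset with an as-built snapshot
// (frozen at delivery) and an as-maintained view kept current by EAM.
interface PartInstance {
  partNumber: string;
  serialNumber?: string; // not every part is serialized
}

interface MaintenanceRecord {
  date: string;
  description: string;
  removed?: PartInstance;
  installed?: PartInstance;
}

interface Asset {
  assetSerialNumber: string;
  productModel: string;         // the PLM-side "product": a model/version
  asBuilt: PartInstance[];      // configuration at the time of delivery
  asMaintained: PartInstance[]; // current physical configuration
  history: MaintenanceRecord[];
}

// Each completed maintenance task updates the as-maintained view,
// which is exactly the information PLM rarely sees after handover.
function applyMaintenance(asset: Asset, rec: MaintenanceRecord): void {
  if (rec.removed) {
    const gone = rec.removed;
    asset.asMaintained = asset.asMaintained.filter(
      p => p.serialNumber !== gone.serialNumber
    );
  }
  if (rec.installed) asset.asMaintained.push(rec.installed);
  asset.history.push(rec);
}
```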

The integration between EAM and PLM is not only a technological issue, but also a business issue. I’m wondering if the whole ecosystem has found a way to distribute fairly the responsibilities and benefits associated with the integration between the two systems.

3. Intellectual Property (IP) Rights

Before cross-organization integration becomes feasible, an alternative solution is to implement a PLM system on the EAM side. More precisely, a product data management (PDM) system will suffice for the requirement of better managing product/asset definition information, since EAM will be able to manage the rest. Setting aside the cost of embracing another system, this solution has another difficulty—the intellectual property (IP) issue. As an asset owner, even if you have an in-house PDM system, how likely is it that your equipment providers will share their detailed design information so you can maintain it in your own system?

Retrieving design information directly from an OEM’s PLM servers also faces the same IP control issue, but it seems more manageable, since the OEM can control what product information is disclosed and what procedures are used to authorize access to that information. When the capability of storing computer-aided design (CAD) models in a non-file-based way is ready (meaning CAD models become collections of objects in a database rather than distinct files), secure and efficient IP control may become even easier. However, in my opinion, IP control over product design information in a collaborative work environment is a complicated issue that can’t be resolved in a short period of time.

These are the barriers between EAM and PLM that I’ve seen. A blog series by P.J. Jakovljevic addresses some of the challenges from the strategic service management (SSM) perspective. What I can leave you with is an imaginary scenario like this:

One day, when a maintenance request is submitted to an engineer at the asset owner, AO-1 Company, he searches for a resolution based on the description of the problem. Unfortunately, within either AO-1’s or the equipment provider OEM Company’s database, he can’t find a satisfactory answer. However, a search result about a different company—AO-2—seems very applicable. This search result describes a resolution of the same problem, requiring the replacement of a certain part, which unfortunately is not in AO-1’s inventory. The engineer then submits an order request to the OEM and is informed that it will take two days for the OEM to ship the service part. In addition, the OEM notices that within AO-1’s part inventory there is another part that can be used as a substitute after light modification, which can be performed on-site, easily and quickly. After the engineer chooses the second option, he receives a dimension specification for the modification and a 3-D animation of how to replace the part. The engineer is then able to complete this maintenance task and log the information in the EAM system before finishing his day. The OEM’s PLM system then realizes that this has become a recurring problem and routes it to the design team…

If this scenario ever happens, I hope it is not in my dreams, but rather in, say, the clouds, maybe.

How Is a Bad Product Developed?

There are multiple answers for how a bad product is developed; many of them are rooted in myopia in the development process.

This morning, as I was leaving a subway station through a tunnel, a billboard caught my eye. Actually, at first glance, I was somewhat startled by the weird eye of one of the women in the picture. A second look revealed that the weird eye was a bolt (on top of a washer) located very close to her right eye. Let me clarify that the bolt and the washer were physical items, not printed in the picture. The washer was slightly bigger than her eyeball, so when looking at the woman, I could see a beautiful left eye and a bolt and washer where the right eye should have been. In fact, there were a few bolts holding the transparent plastic cover over the picture; this one just happened to be in the “perfect” position.

On the way to the office, my thinking continued. Had the graphic designer realized the existence of the bolts during the design phase, he/she would have avoided putting the woman in the wrong spot. Moving the woman just a few inches would have resolved the problem. That being said, this “bolt-in-the-eye” problem arose because of the disconnect between the graphic design phase and the installation phase. In other words, the designer finished his/her work without knowing the situation downstream and threw it over the wall, while the installation staff was just in charge of putting the printed artwork in place, with no responsibility for the quality of the final product.

Actually, this is quite a complicated situation. The advertiser, the design agency, and the operator of the billboard are probably three different entities. Collaboration across organizational boundaries is sometimes very difficult, especially for details such as the position of the bolts. However, as people say, “attention to detail makes all the difference.” To be able to pay more attention to detail, good collaboration during the development process is required, which is why a lifecycle view of the product is a must.

You could say that this is just a rare and special case, and collaboration might therefore not be worthwhile. However, after checking out the other two billboards of the same advertisement in the same subway tunnel, I have to say it is worth some collaborative effort, since the other two have the same problem as well.

This billboard case demonstrates the neglect of the installation phase during the design phase, or the disconnect between two phases in the lifecycle of a product. Disconnects may happen between other product lifecycle phases as well. The following two examples come from daily life.

Example 1: single-use plastic shopping bags

The single-use plastic shopping bag was a “great” invention. It is so economical and convenient that it may even be partially responsible for the rotten food in your refrigerator, because single-use plastic bags allow you to buy much more food than you would normally eat. However, when this product was first developed, I guess people were not thinking about the disposal issue at the end of the lifecycle of this product. To me, it is a sin if you create something without providing a proper way to dispose of what you create.

Example 2: cable ties used in consumer goods packaging

Cable ties are quite handy on many occasions. However, have you ever experienced cable ties that were difficult to deal with when you were trying to unpack a set of self-assembly furniture? In this case, the connection between packaging design and the first use of the cable tie (packing the goods in the factory) is there, since cable ties are an efficient and low-cost way to package goods. However, the connection between packaging design and the second use of the cable tie (unpacking the goods at a consumer’s home) is broken. Cable ties are “innocent,” but using them in packing consumer goods risks being a “design sin.”

The above are good examples of what can happen if you don’t have a lifecycle perspective in product development. Recent years have seen increasing adoption of product lifecycle management (PLM) systems. The beauty of the PLM approach is its holistic view of the entire product lifecycle. Even if it is not a good time for an organization to implement a PLM system, it can at least be PLM-minded.

Taming the SOA Beast – Part 2

Part 1 of this blog topic introduced the notion of how complex and tricky it can be to manage and govern enterprise applications’ service oriented architecture (SOA). That blog post also tackled Progress Software’s recent acquisition of Mindreef in order to round out its SOA governance solution for distributed information technology (IT) environments.

Mindreef joined the Progress Actional SOA Management product family that provides policy-based visibility, security, and control for services, middleware, and business processes. This acquisition continues Progress’ expansion of its burgeoning SOA portfolio and strengthens the company’s position as a leader in independent, standards-based, heterogeneous, distributed SOA enterprise infrastructures.

Prior to being acquired, Mindreef decoupled some plug-in features from its previously all-in-one SOAPscope Server suite.

One such capability was the SOAPscope Policy Rules Manager, which tests compliance with rules such as whether Simple Object Access Protocol (SOAP) and Web Services Description Language (WSDL) headers comply with the WS-I Basic Profile for Web services interoperability. The feature also checks whether the Extensible Markup Language (XML) schema is properly formed, and whether the “contracts” between Web services are valid, so that companies can ensure the services won’t break at run time because of faulty logic.
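
To give a feel for what this kind of rule checking involves, here is a minimal sketch of one tiny slice of it: verifying that a SOAP envelope is well-formed XML with the basic SOAP 1.1 structure. This is a hedged illustration, not Mindreef’s implementation; it relies on the browser’s DOMParser (in Node.js, a package such as @xmldom/xmldom provides an equivalent).

```typescript
// A tiny slice of policy-rule checking: is this SOAP envelope well-formed
// XML, and does it have the basic SOAP 1.1 structure? (Illustrative only.)
const SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/";

function checkSoapEnvelope(xml: string): string[] {
  const issues: string[] = [];
  const doc = new DOMParser().parseFromString(xml, "application/xml");

  // Browsers report malformed XML by embedding a <parsererror> element.
  if (doc.getElementsByTagName("parsererror").length > 0) {
    return ["document is not well-formed XML"];
  }

  const root = doc.documentElement;
  if (root.localName !== "Envelope" || root.namespaceURI !== SOAP_NS) {
    issues.push("root element is not a SOAP 1.1 Envelope");
  }
  if (doc.getElementsByTagNameNS(SOAP_NS, "Body").length === 0) {
    issues.push("missing required Body element");
  }
  return issues; // an empty list means the basic checks passed
}

// Usage with a minimal valid envelope:
console.log(checkSoapEnvelope(
  `<soap:Envelope xmlns:soap="${SOAP_NS}"><soap:Body/></soap:Envelope>`
)); // -> []
```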

Another plug-in, called Load Check, provides a pre-test simulation of the system’s performance. The underlying idea was to mitigate the bad practice whereby, when developing Web services-based applications, load or performance testing tends to be an afterthought, often compensated for by purchasing extra hardware after the fact, at a hefty price.
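
The idea behind such a pre-test can be sketched in a few lines: fire batches of concurrent requests at a Web service endpoint and report latency percentiles before the service ever reaches production. The endpoint URL and parameters below are invented, and a commercial tool does far more than this.

```typescript
// Illustrative pre-deployment load check: N concurrent requests per round,
// several rounds, then latency percentiles. Endpoint URL is hypothetical.
async function loadCheck(url: string, concurrency: number, rounds: number) {
  const latencies: number[] = [];
  for (let round = 0; round < rounds; round++) {
    await Promise.all(
      Array.from({ length: concurrency }, async () => {
        const start = Date.now();
        await fetch(url);                   // one simulated client call
        latencies.push(Date.now() - start);
      })
    );
  }
  latencies.sort((a, b) => a - b);
  const pct = (q: number) => latencies[Math.floor(q * (latencies.length - 1))];
  console.log(`median ${pct(0.5)} ms, p95 ${pct(0.95)} ms, max ${pct(1)} ms`);
}

// E.g., 20 concurrent callers over 5 rounds = 100 sampled calls.
loadCheck("https://example.com/orders/v1/status", 20, 5);
```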

Progress Actional + Mindreef

Like its parent, Mindreef has always designed its products as a good fit for third-party IT governance solutions, with the ability to check on whether Web services are well formed and remain consistent with business policies.

Progress does not release the number of customers it has for specific products or as a corporation, although it admits to gaining access to more than 3,000 of Mindreef’s customers at more than 1,200 organizations worldwide. The ideal customers for the combination of Progress Actional and Mindreef SOAPscope are those seeking full life-cycle quality management of their SOA environments, ranging from design through operational deployment.

Mindreef SOAPscope is a recognized testing and validation software product for SOA services at the design stage, while Actional is the market-leading SOA management, validation, and monitoring software for operational SOA. Thus, the combination of the two provides a solution that is likely the first in the market to address the entire SOA lifecycle with SOA quality, validation, and runtime governance.

Progress Actional and Mindreef provide a deep level of SOA management, testing, validation, and run-time governance functionality, but not all organizations that have begun implementing SOA environments recognize the need for that functionality yet. Companies that have felt the significant pain of diagnosing why SOA composite applications failed in order to get them rapidly back up and running, or that have discovered rogue Web services in their environments with no visibility into them, should see the benefit of deploying Progress Actional and Mindreef.

Progress Actional and Mindreef are sold worldwide from offices in North America, Latin America, Europe, and Asia. A complete list of Progress Software offices is available here.

While hardly any player in the market currently matches the lifecycle SOA quality capabilities that the combination of Actional and Mindreef provides, traditional competitors for Actional include AmberPoint, SOA Software, IBM, Hewlett-Packard (HP), Layer 7 Technologies, and Computer Associates (CA).

As for Mindreef, while it can also be hard to find a single product that competes head to head with SOAPscope functionally, some other vendors’ functionality is comparable. Namely, in sales situations, Mindreef sometimes runs across IBM Rational Software and HP/Mercury, and occasionally some of the smaller niche players like Parasoft, iTKO LISA, PushToTest, and Crosscheck Networks.

Forget Not about Oracle Fusion Either

The recent acquisition of former middleware competitor BEA Systems has made Oracle the middleware market leader, at least in the Java world. The idea behind the ambitiously broad Oracle Fusion Middleware (OFM) suite is the following:

  • to enable the enterprise applications’ architecture shift to SOA
  • to become a comprehensive platform for developing and deploying service-oriented enterprise applications
  • to form the foundation for modernizing and integrating the burgeoning Oracle Applications portfolio

Taming the SOA Beast – Part 1

Certainly, I admit to not being a programmer or a techie expert (not to use somewhat derogatory words like “geek” or “nerd”) per se. Still, my engineering background and years of experience as a functional consultant should suffice for understanding the advantages and possible perils of service oriented architecture (SOA).

On one hand, SOA’s advantages of flexibility (agility), components’ reusability and standards-based interoperability have been well publicized. On the other hand, these benefits come at a price: the difficulty of governing and managing all these mushrooming “software components without borders”, as they stem from different origins and yet are able to “talk to each other” and exchange data and process steps, while being constantly updated by their respective originators (authors, owners, etc.).

At least one good (or comforting) fact about the traditional approach to application development was that old monolithic applications would have a defined beginning and end, and there was always clear control over the source code.

Instead, the new SOA paradigm entails composite applications assembled from diverse Web services (components) that can be written in different languages, and whose source code is hardly ever accessible to the consuming parties (other services). In fact, each component exposes itself only in terms of what data and processes it needs as input and what it will return as output; what goes on “under the hood” remains largely a “black box” or someone’s educated guess at best.
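
A small sketch of what that contract-only view looks like (the service name and data shapes are hypothetical): the consuming party programs against the input and output types and nothing else.

```typescript
// The consumer-visible "contract" of a hypothetical exchange-rate service:
// input shape, output shape, and nothing else. The implementation behind
// the endpoint remains a black box.
interface ExchangeRateRequest {
  from: string;  // e.g., "USD"
  to: string;    // e.g., "EUR"
}

interface ExchangeRateResponse {
  rate: number;
  asOf: string;  // ISO-8601 timestamp
}

// All a composite application ever programs against:
type ExchangeRateService =
  (req: ExchangeRateRequest) => Promise<ExchangeRateResponse>;
```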

Consequently, SOA causes radical changes in the well-established borders (if not their complete blurring) of software testing, since runtime (production) issues are melding with design-time (coding) issues, and the traditional silos between developers, software architects and their quality assurance (QA) peers appear to be diminishing when it comes to Web services.

Transparency is therefore crucial to eliminate the potential chaos and complexity of SOA. Otherwise, the introduction of SOA will have simply moved the problem area from a low level (coding) to a higher level (cross-enterprise processes), without a reduction in problems. In fact, the problems should only abound in a distributed, heterogeneous multi-enterprise environment.

Then and Now

Back to the traditional practices and mindset: the software world considers design as development-centric (i.e., a “sandbox” scenario), and runtime as operation-centric (i.e., a part of a real-life customer scenario). But with SOA that distinction blurs, since Web services are being updated on an ongoing basis, thus magnifying the issues of recurring operations testing and management.

Namely, companies still have to do component-based software testing (to ascertain whether the code is behaving as expected) at the micro (individual component) level, but there is also application development at the macro (business process) level, since composite applications are, well, composed of many disparate Web services. In other words, programmers are still doing traditional development work, but that development work now involves infrastructure issues too.

For instance, what if a Web service (e.g., obtaining exchange rates, weather information, street maps, air flight information, corporate credit ratings, transportation carrier rates, etc.), which is part of a long chain (composite application), gets significantly modified or even goes out of commission? To that end, companies should have the option of restricting the service’s possibly negative influence in the chain (process) until a signaling mechanism is in place that can highlight changes that may compromise the ultimate composite application.
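
One well-known way to implement that kind of restricting-and-signaling is a circuit breaker: after repeated failures, calls to the misbehaving service are suppressed for a cooldown period and a warning is raised, so the rest of the composite application can degrade gracefully instead of failing. The following sketch is illustrative only; the thresholds, URL, and fallback value are invented.

```typescript
// Illustrative circuit breaker around a single Web service call.
class CircuitBreaker<T> {
  private failures = 0;
  private openUntil = 0; // timestamp until which the circuit stays open

  constructor(
    private call: () => Promise<T>, // the wrapped Web service invocation
    private maxFailures = 3,
    private cooldownMs = 30_000,
  ) {}

  async invoke(fallback: T): Promise<T> {
    // Circuit open: skip the call entirely and shield the process chain.
    if (Date.now() < this.openUntil) return fallback;
    try {
      const result = await this.call();
      this.failures = 0; // a healthy response resets the failure counter
      return result;
    } catch (err) {
      if (++this.failures >= this.maxFailures) {
        this.openUntil = Date.now() + this.cooldownMs;
        console.warn("service tripped the breaker; suppressing calls", err);
      }
      return fallback; // graceful degradation instead of a crash
    }
  }
}

// Usage: an exchange-rate lookup that falls back to a cached value.
const rates = new CircuitBreaker(() =>
  fetch("https://example.com/rates?from=USD&to=EUR")
    .then(r => r.json() as Promise<number>)
);
rates.invoke(0.92).then(rate => console.log("exchange rate:", rate));
```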

SAP Backpedals Its SaaS Forays — By Design or Under Duress?

Let me start this blog post with a huge disclaimer: I have no intentions of wilfully beating up on SAP whatsoever!

Sure, the enterprise applications titan has lately been embroiled in an intellectual property lawsuit with archrival Oracle over improper use of support data through its recently discontinued TomorrowNow third-party support subsidiary.

As if this wasn’t enough, SAP is being sued again, and this time over an allegedly failed software implementation. Namely, in late March, Waste Management Inc. filed suit against SAP with claims of fraud (or gross over-promise, if one wants to sound a bit gentler here).

The plaintiff company claims to have spent a whopping US$100 million implementing SAP’s software to run its business out of the box (i.e., without any costly and pesky customizations). Since the software was allegedly a “complete failure,” the company is seeking its expenses plus additional damages.

And it is all coming down in the midst of internal changes and reshuffling in SAP’s leadership, and some high-profile staff departures. There are also indications of a not-so-smooth integration and assimilation of the recently acquired Business Objects.

And yet, we’re a long way off from reading SAP’s obituary any time soon. SAP is the market leader for a reason, and I have a great deal of respect for its team and its ability to weather such storms.

Nor is it my intention to crow over SAP’s apparent hiccups and its acknowledged delay of (with decelerated investments in) its much-publicized software as a service (SaaS) offering, SAP Business ByDesign [evaluate this product].

SAP Business ByDesign is SAP’s first major foray into on-demand software delivery, whereby the company hoped to open a new market for its applications. Prospective customers would be companies that cannot afford its high-end suite, SAP Business Suite [evaluate this product], but that require more sophisticated software than its small business offering, SAP Business One [evaluate this product].

The enterprise resource planning (ERP) giant had quite ambitiously hoped to attract 10,000 users and US$1 billion in revenue with SAP Business ByDesign by 2010. In addition to its commitment to SaaS, the on-demand product represents SAP’s indisputable commitment to the mid-market. In 2007, SAP set the lofty goal of 100,000 total customers, also by 2010. Growing from its current base of 35,000 will naturally require that a sizable number of small and midsized businesses join the fold.

But SAP’s outgoing chief executive officer (CEO) Henning Kagermann recently told investors that the company was now unlikely to hit that target. One would think that such a renowned (and regimented) company would have first conducted a little more research into what customers and partners really wanted before sticking its neck out and so publicly “betting the company” amid the on-demand product’s launch fanfare last fall.

And apparently the much-discussed, ambitiously comprehensive on-demand feature list now looks somewhat incomplete, leaving some observers unsure whether it is all more about vaporware or vapor-demand. The official SAP party line, extracted from the related official information in the Q1 2008 earnings press release, can be seen below; you can draw your own conclusions:

What Brings Customers Closer to Your Product Development?

In general, PLM provides a management framework that brings customers closer to product development, owing to the collaborative perspective of PLM systems. However, the traditional PLM way of including customer input is usually an inbound task, meaning customer input is manually collected and then entered, or recorded by another system and then imported, into the PLM system as “customer requirements.” This approach does help to a certain extent, but it also has some drawbacks in terms of the timeliness and accuracy of the information—due to the indirectness of customers’ involvement.

Another challenge is that customers may not be able to understand your design ideas accurately or express their feedback clearly if they don’t see the “exact” product. A customer may point to a certain spot on your clay model and tell you what he/she feels should be improved, but he/she may not be able to do the same thing using text descriptions, sketches, and two-dimensional (2-D) drawings. Good communication has to be bidirectional and based on comprehensive but explicit information.

In addition, when you move from direct customers to finer granularity (e.g., product users and consumers), the complexity of customer involvement increases. For example, if you are a passenger aircraft manufacturer, you may have tens of airlines as direct customers who can contribute to the development of your products. But you also have hundreds of pilots, hundreds of maintenance engineers, thousands of flight attendants, and millions of passengers who may provide valuable input on the operability, maintainability, or comfort of your products.

So, it seems there is more work to be done in the PLM arena to better incorporate customer input in design. In my understanding, the following two technologies address these challenges very promisingly.

3-D Visibility Down to Earth

As mentioned earlier, text, sketches, and even 2-D drawings are not perfect vehicles for exchanging product definition information. Compared with these formats, three-dimensional (3-D) models have the benefit of containing both the explicitness and the richness of product information. However, 3-D has had its own disadvantages in the past; it was expensive and resource-intensive.

3-D viewers (for example, Oracle AutoVue, PTC ProductView, and SESCOI WorkXPlore 3D, to name a few) created the first wave of bringing 3-D models to a wider audience by letting users display and manipulate (to a certain extent) 3-D models without the authoring applications. More recently, technologies such as the 3-D portable document format (PDF) (click here for an example of a 3-D PDF from Adobe) and the 3-D spreadsheet (e.g., Lattice3D Reporter) have made 3-D visibility even more convenient. Meanwhile, some 3-D computer-aided design (CAD) vendors now also provide lightweight 3-D applications (e.g., Dassault Systèmes 3DVIA Live) for online collaboration purposes. In addition, there are free 3-D modeling applications available for the consumer market (e.g., 3DVIA Shape, also from Dassault Systèmes).

Even better, 3-D can now go mobile. When I was at Dassault Systèmes DEVCON 2009 two weeks ago, the development team showed the 3-D capabilities (displaying 3-D models) of an iPhone. If you want more detail, this blog post gives an interesting example showing how convenient mobile 3-D can be for consumers.

What Brings Customers Closer to Your Product Development?

Bringing all product stakeholders into a tighter loop across the entire product lifecycle is one of the main strategies of the product lifecycle management (PLM) methodology. Following this idea, letting customers (those who pay for and/or use the product) get involved as early as possible in the product design and development phases provides many benefits, including more ideas for innovation, less design rework, higher customer satisfaction, shorter time to market, and more.

Today, including customer input in the design process is not only a theory but also an increasing requirement from PLM users. Based on statistics from the TEC PLM Evaluation Center, among 50 possible business objectives for implementing a PLM system, the option “including customer input in the design process” rose in ranking from 28th (in 2007) to 20th (in 2008) (see figure 1).

AGPL v3 Touches Web Services

The Free Software Foundation (FSF) issued a press release on its newly published Affero General Public License (AGPL) version 3. This license affects the modification and distribution of software oriented toward Web-based services.

The popular adoption of Web-based applications as an alternative to in-house software implementations has meant that companies other than the ones that originally developed a piece of free and open source software for Web-based usage can pick it up, modify it, and foist it upon the world as a new business without necessarily contributing the modifications back to the project. That is a bone of contention for many.

Last year, Tim O’Reilly posted about open source architecture in the context of Web 2.0:

“…in the PC era, you have to distribute software in order to get other people to use it. You can distribute it in binary form or you can distribute it in source form, but no one escapes the act of distribution. And when software is distributed, open source companies have proven that giving access to the source makes good business strategy.

But in the world of Web 2.0, applications never need to be distributed. They are simply performed on the internet’s global stage. What’s more, they are global in scope, often running on hundreds or thousands or even hundreds of thousands of servers…”

From the developers’ perspective, it means that you may have a competitor profiting from your work without the mutually beneficial reciprocity normally expected via FOSS methods. Suppose your company develops a Web-based CRM application. Your strategy is to release the code under a FOSS license because you believe that the community you organize around this software will enable you to improve more quickly and with greater innovation for your customers (or maybe you have some other reason; it doesn’t much matter what it is). Some other companies pick up the code, set up their businesses to provide the same services through the Web, as would be expected, and in so doing modify the code in some nice ways. Note that the software itself is never distributed; it’s delivered as a Web service.

In other words, the clients of the secondary companies benefit from the modifications and the code your company originally developed, but the “rising tide” effect of FOSS development is lost. Because the application is delivered as a Web-based service, the companies providing their own CRM service with it aren’t required to release their modifications back to the community, so the company that originated the application doesn’t benefit from the FOSS community in the way that a non-Web-based app developer would. That’s a potentially big disincentive for developing Web-based apps FOSS-style.

If I understand correctly, the AGPL changes that situation. It puts Web-based software usage onto a footing more parallel to that of installed software. The AGPL is essentially the same license as the most recent GPL (with its patent-oriented concerns, etc.), except that if you look at section 13 in particular, you’ll notice that it requires offering the application’s modified code to people who interact with the application over a network. This restores the ability to access, modify, and further distribute the code of even Web applications.
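
As a hedged illustration of what honoring section 13 can look like in practice (one common approach, not something the license text prescribes), a Web application can advertise and serve its own modified source to the users who interact with it over the network. The Express routes and file path below are hypothetical.

```typescript
// Hypothetical Express app that offers its own (modified) source code to
// network users, one common way to honor the spirit of AGPL section 13.
import express from "express";

const app = express();

// Advertise the source offer on every response so interacting users can
// discover it programmatically.
app.use((_req, res, next) => {
  res.setHeader("X-Source-Offer", "/source");
  next();
});

// Serve the source archive corresponding to the running version.
// The path is a placeholder for wherever your build publishes it.
app.get("/source", (_req, res) => {
  res.download("./dist/webapp-src.tar.gz");
});

app.listen(3000);
```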

As this license gets used, I bet we’ll hear of unexpected uses for applications that were intended as web-based services, but that get implemented inside of a company instead (just because the code was available).

Web 2.0 — “Wow!” or “So What?!”

Another buzzword (albeit not another three letter acronym [TLA]) that has slowly (or not) but surely crept into our collective mind is certainly Web 2.0. Although there have been some attempts at defining the term, such as at Wikipedia, ZDNet or TechTarget (and there are also some noble attempts of ZDNet bloggers, such as Richard MacManus or David Berlind), it is most likely that 10 different folks will provide 10 different interpretations (albeit most of these will revolve around mentioning wikis, blogs, AJAX, mashups, JavaScript, podcasts, social networking and so on).

Generally, I would venture to say any website that uses a little more interactive and dynamic technology (i.e. not just publishing “flat” HyperText Markup Language [HTML] pages) and supports some kind of online commerce, community, or other value-added activity that is enabled by the network would have Web 2.0 traits. But, is it still more buzzword than anything else, and is it being used to put “lipstick on a lot of pigs” even now?

Or, is Web 2.0 a genuine set of technologies that can provide the “richness” of traditional desktop applications (read: Microsoft Office) to Web-based applications, without all the price and/or performance pitfalls that are often associated with Office Business Applications (OBA)? At the least, we need to keep a close eye on how the next generation of office workers uses social networking sites/communities like Facebook, Twitter, tagging, instant messaging (IM), etc., as they can give us a clue as to how effective collaboration should be driven into the next generation of enterprise applications (provided, of course, that security and privacy standards have been met).

Bloggers and market observers have certainly been buzzing about the advent of Web 2.0 for some time now, with recent discussions going in the direction of whether it is really a big deal after all (i.e., whether it should rather be called Web 1.1, or whether it has already earned the 3.0 designation) and whether the venture capitalists (VCs) are slowly reaching the disenchantment stage (possibly similar to the dot-com era at the turn of the century).

In any case, while there is no debate that Web 2.0 has penetrated even the corporate world to a degree (as seen with wikis), and especially in some consumer-oriented front-office applications, my interest here is rather the importance of Web 2.0 deployment within the traditional enterprise applications.

Namely, when a leading enterprise resource planning (ERP) vendor announces that its product suite is Web 2.0-enabled (or compliant), as in recent press releases (PRs) from Oracle (evaluate its flagship product) and SAP (evaluate its flagship product), how should users react? Should they swoon in excitement, or largely yawn and remain indifferent? In other words, when creating selection questionnaire/request for information (RFI) documents, should these capabilities be rated as “must have,” “nice to have,” or something else?

Another phenomenon I’ve long noticed in the enterprise application space is the seemingly large disconnect between vendors’ (and analysts’) hype for cutting-edge applications/product modules and the general market’s preparedness to embrace the new technology. There seems to be a several-year lag before customers are ready to evaluate new products’ capabilities. Is there even a benefit to being a “laggard” vendor in each new application area/technology whiz-bang (e.g., product lifecycle management [PLM], business intelligence [BI]/analytics, software as a service [SaaS], Web 2.0, service oriented architecture [SOA], etc.), so as to avoid some of the learning curve and better address the real needs of the market when the time is ripe?

Well, the fact is that traditional (pre-Web and Web 1.0) enterprise applications were limited in a couple of ways. For one, any kind of processing that had to run on the server side (e.g., a user entering data to apply for a bank loan) required the user to input the data, which would then be sent to the server for processing, and only then would a newly generated page with the results be returned to the user. Also, the user interface (UI) elements were of quite limited interactivity and user friendliness. Such limitations could in the past be overcome in various ways (workarounds):

  1. Some innovative vendors solved the problem of server processing by dividing the screen into sections, so that only one section would send all the data to the server. Thus, this was the only section to be reloaded, while the other sections would obtain the data from it. These vendors were real pioneers, inventing solutions that Asynchronous JavaScript and XML (AJAX) subsequently addressed, albeit a few years later;
  2. Also, if data from the server were not needed to process certain information, the processing could be done on the client side via JavaScript programs; and
  3. JavaScript was also used for more interactive UI elements.

The advent of Web 2.0 has largely solved the above shortcomings in two ways:

  1. AJAX makes it possible to send only part of the data from the Web page to the server for processing, without the user having to leave the page. This ability has enabled Web applications to function like desktop applications; and
  2. Many companies, groups, and/or associations have written JavaScript libraries (e.g., the Dojo framework), and since JavaScript and Web browsers have meanwhile become much more powerful, the combination has brought about a powerful “rich” UI, which has also converged the functionality of Web and desktop applications. The best example is search-as-you-type: in Google Search, as a user starts to write a phrase, the engine captures keystroke events and, after every keyed letter, sends to the server what has been typed so far. The server sends back the most popular phrases (and links) that start with that combination of letters (the “courtesy” of AJAX), while, underneath the search field, the UI shows those phrases (the “courtesy” of advanced JavaScript) so that the user can at any time select the most suitable one. A minimal sketch of this pattern follows the list.
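
Here is that minimal sketch. The element ids, the /suggest endpoint, and the response shape are all hypothetical; the point is simply that each keystroke sends the text typed so far to the server (the AJAX part), and the returned phrases are rendered under the search field without a page reload (the advanced JavaScript part).

```typescript
// Search-as-you-type, as described above. "search", "suggestions", and
// the /suggest endpoint are hypothetical.
const input = document.getElementById("search") as HTMLInputElement;
const list = document.getElementById("suggestions") as HTMLUListElement;

input.addEventListener("input", async () => {
  const prefix = input.value;
  if (!prefix) {
    list.innerHTML = "";
    return;
  }

  // The AJAX part: send what has been typed so far, without leaving the page.
  const res = await fetch(`/suggest?q=${encodeURIComponent(prefix)}`);
  const phrases: string[] = await res.json();

  // The UI part: render the returned phrases underneath the search field.
  list.innerHTML = "";
  for (const phrase of phrases) {
    const item = document.createElement("li");
    item.textContent = phrase;
    item.onclick = () => {
      input.value = phrase; // user picks the most suitable phrase
      list.innerHTML = "";
    };
    list.appendChild(item);
  }
});
```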