Tuesday, November 17, 2009
Oracle Business Process Management Suite, the best of both worlds
This is a joint post on the Oracle Business Process Management Suite from the Dutch Capgemini Oracle Technology team: Léon Smiers, Ruben Spekle and Arjan Kramer.
Since the acquisition of BEA there has been a lot of discussion with clients around the topic of Business Process Management, or BPM. Oracle now has two strategic components in this arena: firstly the already existing BPEL Process Manager (BPEL PM) in combination with the OEM-ed product Aris (Oracle BPA), and secondly the former BEA AquaLogic BPM product, now called Oracle BPM. At Oracle OpenWorld last October, Oracle announced the unification of both products. What is the impact of this unification and when will it take place?
Oracle added BPEL PM to its stack with the acquisition of Collaxa some years ago. In the years that followed, BPEL PM was enriched with adapter functionality and a Human Task component. BPEL PM is aimed at lower-level process integration, also known as orchestration. Because of this, designing a BPEL application is a rather technical exercise; for business analysts BPEL is too technical. To fill the gap for designing business processes, Oracle OEM-ed Aris and enabled the creation of BPEL models based upon Aris business process models. Aris is the de facto standard in the market for business process modeling and contains a huge stack of (industry) reference models.
In BPM, creating workflows is much easier: a business analyst models all the processes and draws the lines between the process steps, based upon which runtime components are created automatically. It is a complete framework for creating workflow-type applications. Besides the ease of creating workflow processes, BPM is very strong in its built-in mechanism for simulating processes. The designer (business analyst) sets timings for each step in the process and can then simulate the process. Based on this simulation, bottlenecks such as human tasks or asynchronous services can be spotted at design time, without even involving real services yet (see the sketch below). During the runtime of a business process, audit information is captured and can be fed back into the process, so the process can also be tuned based on runtime information. This gives a real round-trip lifecycle for business processes. A third strong point is the versioning capability of components (processes, variables) within BPM, which allows multiple versions to run in parallel. Oracle BPM doesn't (yet) have as many adapters as BPEL PM, but integration via web services works very well.
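To make the simulation idea concrete, here is a minimal, tool-agnostic sketch in plain Java. This is NOT the Oracle BPM simulation engine; the step names and expected durations are made up for the example. It simply assigns an expected duration to each step, runs many simulated process instances and reports where the time goes, which is exactly how a human task shows up as the bottleneck at design time:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Random;

public class ProcessSimulation {

    public static void main(String[] args) {
        // Expected duration per step in minutes (illustrative values only).
        Map<String, Double> stepMinutes = new LinkedHashMap<>();
        stepMinutes.put("Receive order (service)", 0.1);
        stepMinutes.put("Credit check (async service)", 30.0);
        stepMinutes.put("Approve order (human task)", 240.0);
        stepMinutes.put("Confirm order (service)", 0.1);

        int instances = 10_000;
        Random random = new Random(42);
        Map<String, Double> totalPerStep = new LinkedHashMap<>();

        for (int i = 0; i < instances; i++) {
            for (Map.Entry<String, Double> step : stepMinutes.entrySet()) {
                // Draw a duration from a simple exponential distribution around the mean.
                double duration = -step.getValue() * Math.log(1 - random.nextDouble());
                totalPerStep.merge(step.getKey(), duration, Double::sum);
            }
        }

        double grandTotal = totalPerStep.values().stream().mapToDouble(Double::doubleValue).sum();
        System.out.printf("Average cycle time: %.1f minutes%n", grandTotal / instances);
        totalPerStep.forEach((step, total) ->
                System.out.printf("%-35s %5.1f%% of total time%n", step, 100 * total / grandTotal));
    }
}
```

Running it shows the human task dominating the cycle time, which is the kind of insight the simulation feature gives you before a single real service is involved.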
At Oracle OpenWorld (October 2009), Oracle announced that BPM, BPEL PM and Human Tasks will be moved into one Business Process Management Suite. During OOW there were opportunities to get hands-on with this new stack, which looked really promising: the same excellent process definition functionality BEA AquaLogic BPM users were used to, extended with integration with BPEL, Human Tasks and other service components, all combined in one single stack. In this unification it is predominantly the BPM part that will change. BPM will be based upon the BPMN 2.0 standard, the design time moves to both JDeveloper (the implementation part) and the browser (the design part), and the runtime will be a shared runtime environment with BPEL PM. The BPEL PM product, on the other hand, stays largely untouched in the unification process.
In this architecture BPM, BPEL, Human Tasks and Aris support the following functionality:
BPM supports the Choreography type of functionality, and is more aimed at Business Analysts.
BPEL PM supports Orchestration, low level process integration, and is more aimed at technical people.
Human Tasks support the workflow component of a process, whether this is started from within BPM or BPEL PM.
Aris is used by Enterprise Architects and Business Analysts on a high level.
Unfortunately Oracle does not provide any dates for when a version will be released or what functionality it will contain, but we can be confident that it will be available somewhere in the first half of 2010. In the meantime we can still use both separate process management solutions, with a preference for BPEL PM over BPM, because we expect fewer migration complexities towards future releases.
Léon Smiers is an Oracle Solution Architect at Capgemini; you can follow him on Twitter
Arjan Kramer leads an expert group of Java-based integration technologists and is a thought leader on Oracle Fusion Middleware at Capgemini. Follow his thoughts on Twitter
Ruben Spekle is an Oracle Solution Architect at Capgemini; you can follow him on Twitter
Soon to be posted on the Capping IT Off blog
Monday, August 24, 2009
Composing applications like Lego, SCA in a nutshell
Composition is all about combining functionalities exposed from different sources. Up until now SOA hasn't brought us the salvation of combining functionalities from disparate sources, but there may be help on the way. Service Component Architecture, or SCA, offers a simple application model for wiring all this functionality together.
Composition is about combining functionalities exposed within an application or from external sources. How is this implemented in programming languages?
Combining functionality within an application is standard practice in languages such as Java and .NET. The functionalities exposed and reused are fine-grained, there are lots of calls between objects, and the communication is based upon method-style calls.
Remote invocation in the same language, from outside the application, is hard to implement; it is error prone and a lot of time is usually lost on connection and protocol complexity.
Crossing the border in a multi-language environment calls for the language-neutral SOA approach, typically implemented with web services. The usage model of a SOA-style call is completely different from combining functionality within an application: the exposed and reused functionalities are coarse-grained, the number of calls between services is usually low, and communication is based upon document-style calls. Implementing web service invocation and handling, even though standard frameworks exist, is still time consuming, and all the error handling needs to take place within the application.
The complexity increases when the same service needs to be exposed over multiple communication protocols. Making a unified communication model based upon web services is unfortunately not the answer, because of the different usage models of SOA-style and low-level application-style calls. Web services used for low-level application calls, for instance, are killing for performance.
Service Component Architecture (SCA) offers an implementation model for combining functionality, from internal and external sources, into composites. It offers an environment which takes care of all the communication, or wiring, between the services. SCA is limited to combining application or data logic; it is not aimed at the presentation layer or the data persistency layer. SCA is a standard defined by the OSOA organization and is backed by vendors such as IBM, SAP, Oracle (including BEA), Iona and Sun.
Without going into detail, here is a short summary of SCA.
SCA provides four basic building blocks: Services, Components, Composites and the Domain.
Services, Components and Composites are part of the SCA design time environment; the Domain is the SCA runtime environment.
The basic building block is the Component, which holds and implements functionality that can be exposed via Services. SCA does not deal with the programming of the component itself; it only handles combining functionality. The only thing a developer needs to do is state in the component (for instance a Java class) that at a certain point in the application external functionality is needed; this can be done via a reference name added as an annotation in the application.
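As a minimal sketch of what that looks like in Java, assuming the OSOA SCA Java annotations (org.osoa.sca.annotations) and with all interface and class names made up for the example:

```java
import org.osoa.sca.annotations.Reference;
import org.osoa.sca.annotations.Service;

// Hypothetical business interface exposed as an SCA service.
interface OrderService {
    String placeOrder(String productId, int quantity);
}

// Hypothetical interface implemented by another component (or an external
// web service); the SCA runtime decides how the call is actually wired.
interface CreditCheckService {
    boolean isCreditWorthy(String customerId);
}

@Service(OrderService.class)
public class OrderComponent implements OrderService {

    // The component only states THAT it needs this functionality;
    // the composite file wires the reference to a concrete target.
    @Reference
    protected CreditCheckService creditCheck;

    @Override
    public String placeOrder(String productId, int quantity) {
        if (!creditCheck.isCreditWorthy("customer-123")) {
            return "REJECTED";
        }
        return "ORDER-" + productId + "-" + quantity;
    }
}
```

The wiring itself, i.e. which concrete component or binding the reference points to, lives in the composite file described below, not in the Java code.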
The Composite is a virtual container (just an XML file) in which Components are defined and wired together. Policies can be added to Components when there are specific requirements on usage or communication, for instance security or reliability.
SCA enables building applications just like combining Lego blocks: Components can be combined into a Composite, and Composites can be reused in new Composites. But be aware that the chain is as strong as its weakest link. If anywhere in the composite there is a component that is badly tuned, it affects the composite as a whole. Composites should therefore be accompanied by SLAs stating what is expected of the composite.
The runtime environment of SCA is the Domain. Multiple composites can be contained in a Domain. The Domain takes care of all the communication handling and the policies. In contrast to the SOA principle of being vendor neutral, a Domain is based upon one language. This follows one of the lessons learned from SOA: vendor neutrality for fine-grained services is overkill from a design perspective and, at runtime, can be resource intensive and cause performance problems.
SCA uses a very pragmatic approach: when services and clients are written in the same language, there is no need for a language-neutral representation. SCA handles all the communication between the objects, or wiring, in the way best suited to the language and the environment. In a hybrid distributed environment, the communication is handled via web services.
What are typical use cases for SCA?
(1) Data composites: in an SCA data composite, data coming from different locations is combined, and the composite returns the combined set.
(2) Application functionality Composite
(3) Process composites
SCA will be able to fulfill the promise of SOA as long as its basic principle, simplicity, is preserved.
Sunday, July 12, 2009
The risks of the SOA EDA promise, a collapse of the power grid of applications
Looking at history, where networks are created and can collapse, we in IT need to learn what the risks are of our attempts to unite the IT landscape with Service Orientation (SOA) and Event Driven Architecture (EDA). Examples from the financial world and the utility market should be a warning to IT of what could hit us. This calls for craftsmanship: people in our profession who have the ability to look across borders and to keep control by means of governance, at design time AND at runtime.
One example of a collapsing network is the ongoing subprime mortgage crisis, which still creates waves of unemployment and company closures and costs a lot of money. It was caused by the globalization of the financial world, the would-be belief that house prices would never go down, the growing dependencies between markets and the lack of control. At one point it became clear that the foundation of the economic rise, the ever-increasing house price, was not a certainty anymore, and the network collapsed.
Another example is in the utility market. On August 14th, 2003, a disruption of the energy grid caused a blackout in the north-eastern part of the US and Canada, affecting tens of millions of people. It started as a power fluctuation in one part of the grid and, in no time, affected the whole area. Here one flaw in one part of the energy grid caused the collapse of a very large part of that grid.
In IT we come from silo-based architectures: every functionality (or set of functionalities) is contained in one silo, which may have integration points with other parts of the enterprise but essentially stands on its own. When one of these silos goes down it can cause a major problem for the enterprise: loss of sales, the halting of an essential production process, and in the end financial and reputation damage. But it would NOT cause a cascading domino effect on other systems or enterprises. The promise of SOA and EDA is a network of applications where everything can be connected and reused. Especially the reuse part carries risk: reuse without insight into the relations can create single points of failure.
We are creating the power grid of applications for the future, that makes us responsible for making the right choices now.
One of these choices is to know the dependencies between the system components, both at design time and at runtime.
At design time, we need to be able to quickly identify all the dependencies in our landscape. This includes all application types: custom built (on all kinds of technologies), package based, legacy and integration components. The ultimate wish is to be able to create an impact analysis list of all the touched components with just one push of the button; a minimal sketch of this idea follows below. Another decision made at design time is whether we'd like to reuse an existing component, elsewhere in the enterprise landscape or even outside it, and what the impact is if that (reused) component goes down. Based on that risk analysis we can decide whether to reuse the component or build it ourselves.
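As a minimal sketch of such an impact analysis (not a real governance tool; the component names and graph structure are made up), a dependency registry can be modeled as a directed graph and the "one push of the button" as a transitive walk over the "used by" edges:

```java
import java.util.*;

/**
 * Toy dependency registry: for each component we record which components
 * depend on it, and an impact analysis is a transitive walk over those
 * "used by" edges. Real governance tooling does far more; this only
 * illustrates the principle.
 */
public class ImpactAnalysis {

    private final Map<String, Set<String>> usedBy = new HashMap<>();

    public void registerDependency(String consumer, String provider) {
        usedBy.computeIfAbsent(provider, k -> new HashSet<>()).add(consumer);
    }

    /** All components directly or indirectly affected by a change to 'component'. */
    public Set<String> impactOfChange(String component) {
        Set<String> impacted = new LinkedHashSet<>();
        Deque<String> toVisit = new ArrayDeque<>(usedBy.getOrDefault(component, Set.of()));
        while (!toVisit.isEmpty()) {
            String current = toVisit.pop();
            if (impacted.add(current)) {
                toVisit.addAll(usedBy.getOrDefault(current, Set.of()));
            }
        }
        return impacted;
    }

    public static void main(String[] args) {
        ImpactAnalysis registry = new ImpactAnalysis();
        // Hypothetical landscape: services, a legacy adapter and two front ends.
        registry.registerDependency("CustomerService", "CustomerDatabase");
        registry.registerDependency("OrderService", "CustomerService");
        registry.registerDependency("LegacyBillingAdapter", "CustomerService");
        registry.registerDependency("WebShop", "OrderService");
        registry.registerDependency("CallCenterApp", "OrderService");

        System.out.println("Impact of changing CustomerDatabase: "
                + registry.impactOfChange("CustomerDatabase"));
    }
}
```

The hard part in practice is not the traversal but filling and maintaining the registry, which is exactly the loading problem described further below.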
At runtime we need insight into the interrelations between all deployed components, so we can identify possible risk areas, for instance a service that is used by an ever-increasing number of applications.
Where should we start? In most environments people start off with Excel sheets, which in no time become unusable because of the complexity, which in turn calls for proper governance tooling (or, at its basis, a dependency tracking tool). When you start with a governance tool the environment is, of course, empty. Loading all the components and dependencies into your environment is time consuming and costly. This is the main reason why dependency tracking tools, (SOA) governance tools and service repositories are hard to get started with in enterprises: the business case at the start is way too costly, even though we all know that in the long term they are essential for getting a grip on your IT environment. A differentiator in this market will be the option to load an existing environment into the dependency tracking tool's repository.
Friday, June 12, 2009
The last piece of the process integration puzzle, looking at the UI part
Process integration is a subject that always comes back when you're defining a software architecture. There are different types of process integration, based upon the integration partners involved, the integration types, the number of users involved, the connection type, the length of the process, etc. Loads of books have been written about process integration and a lot of patterns have been defined.
When looking at the Application-to-Application (A2A) and Human-to-Application (H2A) types of processes, a cross-technology solution has been defined in the form of the XML-based language BPEL. BPEL is the conductor of a process and does call-outs to functionality by means of web services and adapters to legacy systems.
The functionality that BPEL offers for process integration, however, is not very useful for a UI type of process, like the check-out sequence of a book ordering process on the web. These are typically hard-wired within UI frameworks, as we do (or used to do) in Java with Struts or JSF. In these solutions there is no flexibility or reuse: rearranging a UI process requires rework and redeployment, and on average there is not much reuse of UI process components. From a SOA perspective, you'd like to reuse (groups of) pages and handle them as services in order to create composite applications.
There is some movement in this market: Oracle came up with a UI task flow mechanism in the Fusion Middleware 11g release.
The basis for the Oracle solution is JavaServer Faces (JSF), and it is called ADF Task Flows. Components in these task flows are complete pages (JSF), page fragments (parts of a JSF page), Java call-outs and, most importantly, the task flow definition. This task flow definition resembles the BPEL way of working and acts as the conductor for handling a process from a UI perspective. It uses a plain XML file referencing all the needed components, or services.
Two types of task flow exist: unbounded and bounded task flows.
An unbounded task flow is just a predefined set of pages used to complete a task; this task flow can be entered from any page within the task flow.
A bounded task flow acts more like a service: only one entry point exists and zero or more exit points, which can produce an output result. The inner behavior is 'private' and cannot be changed or interfered with by the outside world.
Bounded task flows are the neatest ones; I've been searching for these for quite some time. Finally the possibility to reuse a group of pages within a new composite.
I hope the market picks up this initiative and creates a cross-technology integration language for UI-based process integration, just like we now have with BPEL.
For more information see
Sunday, May 17, 2009
The cost of dead code
Once in a while I see this situation on the street: trash aligned around a garbage bin, because somebody made an attempt to throw away his trash but was too lazy to finish the job. 'Close, but no cigar', so somebody else has to finish the job, which costs energy (and money).
In my job as a software architect I come across this behavior a lot as well. Once in a while I have to dust off an old application and investigate the possibilities for modernization or revitalization. Modernization or revitalization comes down to: what can we do to extend the lifetime of an application for a couple more years, while also adding agility, stability and new or changed business requirements to the application? It also helps to identify the roadmap for the future of the application. To make the modernization roadmap there is a list of things to investigate: the size of the application, the context of the application, the complexity of the code, (future) business needs, but also the amount of 'dead code' in the application.
What is dead code? At some point in the history of the application, functionality was removed or changed, but the actual code is still present in the application: a procedure that is not called anywhere in the application, a page that was removed from the menu, old functionality that is commented out, database objects that are no longer used but still contain data, and so on (a small illustration follows below). What you see here is scared or lazy behavior: scared as in 'I'm not sure if I removed or changed the right code', lazy as in 'I don't want to spend too much time'. The result is that the application becomes a jungle: you get lost in the dead wood, and making a change gets more and more costly as the dead code grows. So let's change our behavior in the maintenance phase: if functionality is not needed anymore, just throw it away, like you would do with trash in a garbage bin. With a proper versioning system in place, you can always find the history back.
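A tiny, made-up Java fragment of what this looks like in practice; the unused method and the commented-out block are exactly the kind of dead wood that should simply be deleted, since version control keeps the history anyway:

```java
public class CustomerReport {

    // Still called from the menu: live code.
    public String buildMonthlyReport(String customerId) {
        return "report for " + customerId;
    }

    // Dead code: the quarterly report was dropped from the menu years ago,
    // but the method (and its maintenance burden) stayed behind.
    public String buildQuarterlyReport(String customerId) {
        return "quarterly report for " + customerId;
    }

    // Dead code: commented out "just in case" instead of being removed.
    // public String buildYearlyReport(String customerId) {
    //     return "yearly report for " + customerId;
    // }
}
```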
Monday, April 27, 2009
The seven challenges for Oracle
Oracle is moving towards becoming the biggest software provider in the market, by means of smart acquisitions and a simple aim: to be bigger than SAP and IBM. I identify seven major challenges for Oracle in the coming years. All of them need to be guided by a sound roadmap towards the future.
What are the challenges for Oracle (and therefore also for Oracle partners and clients) in the coming years?
(1) Integration of all products
Of course this is a no-brainer, but it is a tough job. The roadmap helps in identifying integration points between the products to be integrated, while still enabling the individual products to grow in parallel.
(2) Offering a roadmap towards the future
Last year, on the first of July, Oracle presented the acquisition of BEA in an excellent webcast, showing the preferred product stack for the future. Also included was a roadmap for the 100 days following the webcast, which covered the integration of some major BEA components into the Oracle stack. The 11g version of the technology stack would be a new milestone, but it is still not in production and it is not yet clear what the choices are.
What I'm missing now is a decent roadmap for the future. Being a solution/software architect, I need to know what is happening to a product stack in the coming years when I propose a solution to one of my clients. This roadmap should contain product information, timelines and migration strategies for keeping up with the technology stack. Importantly, the roadmap should not start from today only, but should also cover all 'old school' Oracle products.
(3) Keep the 'old' Oracle customers on board
The impact of this fast change is that all existing clients (for instance 65K database clients in EMEA alone) see Oracle speeding away over the horizon. With the help of the roadmap we can help them move at a controlled pace towards the future, not only for technology's sake, but by adding business value to their landscape.
Oracle should invest in migration tools that help customers move towards the new technology stack. And not, as I've heard a product manager say, 'well, open the old and the new environment on two separate PCs and start typing...' ;)
(4) Challenging other database products
Oracle is the market leader in the database area. This area, however, is getting more and more commoditized. Electricity was once a specialized product, but now you just get it 'out of the wall'. Speed, stability, availability and storage costs are the issues in the database market, not extra functionality. Since software is slower than hardware, maybe Oracle should create hardware-based databases on the newly acquired Sun systems, with extreme speed and a low price. Another option would be offering the database in a cloud.
(5) Challenging Open Source
In the open source market, more and more products are becoming standardized and maturing. Products like MySQL and JBoss have the potential to be used in all types of core applications. One of the biggest advantages Oracle has over open source is that it can provide support, plus the likelihood that it will keep existing during the lifetime of an application, and longer. Again, the roadmap is essential in providing information to clients on when (not) to choose open source.
(6) Challenging IBM
There's one thing where IBM beats Oracle, and that's architecture. IBM, however, has the same problem as Oracle with its technology stack: an extensive product stack with, not always, the best alignment among product components.
Oracle has the opportunity to shape a new landscape that is based upon a strong architectural foundation and is in complete alignment. The roadmap shows the speed of shaping this new architecture.
(7) Challenging SAP
SAP beats Oracle with industry solutions, but with the introduction of the Oracle Application Integration Architecture (AIA) Oracle is catching up. Oracle asks partners and clients to help create new AIA industry patterns. Here the roadmap can show the speed of development of patterns across industries. Oracle's advantage over SAP in this arena is the momentum of development of the new patterns; SAP is struggling with the implementation of its SOA strategy and with the question whether the choice of Java in its environment was a good one.
Of course Oracle can't do this on its own, but with the help of partners and clients we can come a long way, by helping to create a sound roadmap that will guide us through the coming years in the Oracle world.
Thanks to Joost van der Vlies and Ruben Spekle
Wednesday, April 15, 2009
The four ‘FIT’ factors, making a Business Case for Open Source and Vendor products
Open source is hot in the market. It is free and there is no vendor lock-in. True, a landscape that depends 100% on one vendor can lead to situations where your business wishes are not in agreement with the architectural possibilities (or pricing strategy…) of the vendor products. So are we better off with open source, where unlimited developers seem to be working day and night? As always, the answer is: it depends...
So how can we decide between open source and vendor-based technology? This can be based on four 'FIT' factors: Functionality, Technology, Pricing and Support.
First of all, the Functionality fit has to be determined. Fit for functionality is the acceptance factor for the end users. The ability of the technology to (rapidly) adapt to functional changes also has to be checked, and the expected lifetime of the needed application has to be determined. How well can the needed functionality be mapped onto the features of the software?
Then look at the Technology. What is the needed stability of the application (this is related to support)? What is the track record of the technology in this area? How easily can new technology be added to the stack and combined with other components? How much knowledge is available in the market? How open is the technology? How proven is it?
Pricing is of course also an important component. It consists not only of license and support fees, but also of development costs and the personnel needed after go-live to support the application.
Support requirements differ according to the type of application and its usage. They may range from an FAQ list or a 24x7 help line all the way to immediate on-site support. In the case of open source you should get an idea of the group of developers that does the updates; otherwise you may get stuck with the functionality you have.
What is the Business Case for Open Source
Functionality
+ Fast delivery of add-ons and new functionality by the developer community
- No timelines for new functionality
Technology
+ Push from new technology (enables new functionality)
+/- Will it still exist at the expected end-of-lifetime of the needed functionality?
+/- The open source code base can be changed by the development team; what is the risk for the future and the ability to upgrade?
Pricing
+ No license fees
- Costs involved for keeping team of developers available for support
Support
+ A lot of independent developers are working on the source set
- In case of emergency, who do you call?
- Dependency on the developers after go-live (developers lock-in....)
- No official support
And the Business Case for Vendor based technology
Functionality
+ Track record of core functionality
+ Timelines for new functionality
+ Industry solutions implemented
- Marketing and sales strategy may have a higher priority than functional and technical developments
- Speed of development on new areas
Technology
+ Transactional stability (proven)
+ Track record of core functionality
+/- The role in the ecosystem as a provider of stable out-of-the-box systems is sometimes forgotten
+/- How well does the vendor do with regards to upgrades to newer versions/technologies?
Pricing
- License and support costs
+ A good fit between functionality and vendor-based technology / industry solution can result in lower implementation costs
Support
+ High level of support possible
So, depending on the needs of the system, the four 'FIT' factors Functionality, Technology, Pricing and Support make the case for open source or vendor-based technology. Suggestions are always welcome!
Written with Joost van der Vlies, thanks to Ruben Spekle
Labels:
Architecture,
Open Source,
Vendor Based Technology
Tuesday, March 24, 2009
Oracle Coherence experiences from the field
Last week I had a short conversation about Oracle Coherence on Twitter (#OraCohMDM). Here is a more in-depth description of our experience with this product. Last year we came across a couple of projects that were an excellent case for the usage of a grid database on the middle tier.
The first case was about processing bulk data in a small timeframe. Every minute some 150 MB of XML data was delivered (zipped) to our application via a web service. The requirement was that the data had to be processed within 20 seconds and that selected parts of the data had to be sent to a bunch of subscribers. The architecture had to be scalable in both the number of consumers and the data load. For history purposes, all the data also needed to be stored in a database.
The second case could be described as having a high CIA constraint (Confidentiality, Integrity and Availability). Multiple providers send data via web services; the messages were, contrary to the first case, small in size. The number of messages, however, is much higher: some 10-100 messages per second.
Since we know that (Oracle) databases are stable, why can't we implement this with the help of a database? The answer is simple: there is a performance bottleneck between the middle tier and the database. In a 'normal' situation, for instance a web-based transaction application, the JDBC connection is able to do its work properly. But when faced with high data volumes or high message volumes, JDBC clogs up. So how does a grid database do a better job? Elements of success for a grid database are the ability to scale up and down easily and to minimize the risk of losing data when, for instance, a server goes down. Preferably there should be no master of the grid; all grid elements are equal.
Simply put, the grid works as follows. A grid is a series of interconnected nodes, which communicate via an optimized network protocol. Every grid node has knowledge of the grid, knows what kind of work to do and knows its neighbours, just like bees in a beehive. When a new data object enters the grid, one grid node takes responsibility and contacts another grid node to be its backup (preferably a node on another server). When a new node enters the grid, it communicates some sort of "Hello, I'm here", just like when you enter a party. All grid nodes start communicating with the new node, and (some) data is redistributed across the grid, in order to minimize loss of data. When a node leaves the grid, again (some of) the data is redistributed, to avoid a data object being available on only one node.
Oracle obtained Coherence (Tangosol) in 2007 because of its excellent middle-tier grid capabilities. We've performed tests with Oracle Coherence in different hardware and network configurations. Our tests have shown that Coherence scales nearly linearly, with an optimum of around one node (or JVM) per CPU core. Adding the JRockit JVM shaved off another small part of the overhead.
So how do you start with Oracle Coherence? Coherence is Java based, so a good knowledge of Java is essential. The data is stored as key/value pairs in maps (comparable to database tables). On each map a listener can be placed, so that an object change results in an event. By adding object-relational mapping (for instance with TopLink) you can percolate newly added data into the database, or load data from the database. By adding an expiration time to the data within Coherence, you don't have to clean up your data yourself. A small sketch of these basics follows below.
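As a minimal sketch of those basics (the cache name, keys and values are made up for the example, and the API usage reflects the Coherence 3.x Java API as I recall it, so check the documentation for your version):

```java
import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;
import com.tangosol.util.MapEvent;
import com.tangosol.util.MapListener;

public class CrateTracking {

    public static void main(String[] args) {
        // Join (or start) the cluster and obtain a named cache;
        // the cache name "crates" is just an example.
        NamedCache crates = CacheFactory.getCache("crates");

        // React to changes as events, instead of polling the data.
        crates.addMapListener(new MapListener() {
            public void entryInserted(MapEvent event) {
                System.out.println("New crate seen: " + event.getKey());
            }
            public void entryUpdated(MapEvent event) {
                System.out.println("Crate moved: " + event.getKey()
                        + " -> " + event.getNewValue());
            }
            public void entryDeleted(MapEvent event) {
                System.out.println("Crate expired or removed: " + event.getKey());
            }
        });

        // Plain Map semantics for the data itself...
        crates.put("crate-4711", "warehouse-A");

        // ...and an optional time-to-live, so old measurements clean themselves up
        // (here: one hour, in milliseconds).
        crates.put("crate-4712", "truck-17", 60 * 60 * 1000L);

        CacheFactory.shutdown();
    }
}
```

The object-relational part (pushing these entries through to the database, or loading them on a cache miss) is then plugged in behind the cache, so the application code above does not change.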
Where to get more information
- SOA Patterns, deferred service state
- Oracle Coherence
- Oracle Coherence Incubator
Thanks to Oracle PTS (especially Flavius Sana & Kevin Li)
Tuesday, March 10, 2009
Cross domain Business Rules, who has got ideas?
We are moving away from a single-application landscape towards a more distributed, service-oriented landscape. This has impact on the way we deal with data integrity. In the 'old days' we were confined to one dataset, in which we could check our data integrity by means of simple programming with triggers and constraints, or with a business rule engine bound to the data model. The only discussion we had was about the place where rules were checked (in the screen or in the database).
I am currently doing (nice) work for a company that is moving away from its 'single application' back offices (one being USoft, a Dutch company which used to be known as the Business Rule company) towards a service-oriented environment. The landscape is going to be separated into domains, based upon ownership of the different functionalities.
Business rule thinking is part of my client's DNA, so we really have to define a thorough solution for validating rules across domains.
The rules of engagement are:
- Domains are independent.
- Transactions executed in one system may not be hindered by restrictions defined in another system.
- Business rules are defined and owned by a domain. These rules can be used across domains, but are always initiated by the domain that owns the rule.
How do we deal with cross domain Business Rules?
Consider for instance two domains, Relation and Education. Within the Education domain we have a business rule: 'An active education can only be connected to an active relation.' It is the responsibility of the Education domain to uphold this rule. When the relation is de-activated within the Relation domain, this is allowed, since the Relation domain is not responsible for the Education domain. So at that point in time we have an invalid situation.
How to deal with this?
We could agree that the Relation domain sends information about the de-activation to a central messaging mechanism, where other domains (i.e. the Education domain) can subscribe to these types of messages. The Education domain is then responsible for taking actions to restore a valid situation, for instance closing the education belonging to the relation. Part of the governance of the Education domain is then that it periodically checks whether open educations exist that belong to closed relations, to catch invalid situations that slipped through because the subscription mechanism did not work correctly. A minimal sketch of such a subscriber follows below.
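As a minimal sketch of this publish/subscribe idea (all class and method names are hypothetical; in practice this would sit on a JMS topic, an ESB or an EDA backbone):

```java
import java.util.List;

/**
 * Hypothetical subscriber in the Education domain. It owns the rule
 * "an active education can only be connected to an active relation"
 * and restores a valid situation when a RelationDeactivated event arrives.
 */
public class EducationDomainSubscriber {

    /** Event as published by the Relation domain (illustrative shape). */
    public record RelationDeactivated(String relationId) { }

    /** Hypothetical repository interface of the Education domain. */
    public interface EducationRepository {
        List<String> findOpenEducationsFor(String relationId);
        void close(String educationId, String reason);
    }

    private final EducationRepository educations;

    public EducationDomainSubscriber(EducationRepository educations) {
        this.educations = educations;
    }

    /** Called by the messaging infrastructure for every matching event. */
    public void onRelationDeactivated(RelationDeactivated event) {
        List<String> openEducations = educations.findOpenEducationsFor(event.relationId());
        for (String educationId : openEducations) {
            // The Education domain decides HOW to restore validity; here we close.
            educations.close(educationId, "relation " + event.relationId() + " was deactivated");
        }
    }
}
```

The periodic governance check described above would then be a batch job calling the same repository, comparing open educations against closed relations.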
The ease of checking our business rules and upholding data integrity in a single-application environment is gone in the distributed, service-oriented world. Who has experiences or ideas on how to deal with this cross-domain validation problem?
Thursday, February 26, 2009
Ever had the feeling that you are an Information Archaeologist?
How many times do you have to find out what the functionality of a system is and what the data means? A lot of times you come to the conclusion that the documentation is insufficient or (most of the time) not available, and that the people who built the system are long gone. The only thing that remains is "the code is the documentation". You then end up in what used to be so neatly called 'reverse engineering'. Actually, what we are doing is the modern form of archaeology: scraping away layer after layer of code, data and applications in order to understand the true meaning of the application landscape.
The software archaeology exercise is mostly done when we are asked to rebuild an old application in a new landscape. Most of the time we put developers to work who are trained in new technologies, but have no clue whatsoever about the old technology in which the application was built. The result is that these teams are recreating, over and over again, the 'Rosetta Stone': the key to translating the functionality out of the code. We tend to forget that a system is built not only around its functionality but also around what the technology enables and what its strengths and weaknesses are. The reason why procedural languages are used in heavy batch applications (performance...), for instance, only comes to light when the application is rebuilt as an object-oriented application, resulting in bad performance.
So the AS-IS situation of a landscape, and all the reasoning behind it, is almost as (or even more) important than the 'future state to be'. The roadmap towards that future state is the task of a (software) architect who not only knows the 'new technologies' but also has a strong background in the 'old stuff' and all the reasoning behind it. Only then can we move towards a Stability Based Architecture (SBA).
Tuesday, February 17, 2009
The Data Refinery
Two years ago I was involved in an RFID project in the logistics market, in a joint Capgemini and Oracle effort. Based upon the EPCIS architecture we had to combine movement data of crates between different storage facilities and stores. Without going into the details of the case: on the technology side, the biggest challenge in RFID is the huge amount of data and how to get data out of it that makes sense for a particular use case. We came up with a division of the lifecycle of RFID data into three main segments (in discussion with Ard Jan Vethman):
(1) Real time (within seconds): you want direct actions, for instance when a crate enters a room where it is not supposed or expected to be.
(2) Short term (hours, days, weeks, depending on the business case): you want to act on a condition over multiple measurements. For instance, when a shipment of fresh food between a factory and a warehouse normally takes 4 hours, you want to send an alert when the shipment takes a day.
(3) Longer term (more than weeks): you want to see how your processes can be optimized against the measurements that are received.
You can look at this as a data refinery: bulk data is processed as it flows in (complex event processing), and the result, or residue, is a small, high-quality subset of all the data that entered, the exceptions from all that is OK. The residue is then used as events which can be sent to a business administrator or a systems administrator. Oracle provides the following technologies for this functionality. For the longer term we've known BI for a long time. For the short term, Oracle Business Activity Monitoring (BAM) has been around for some years and provides excellent insight, both graphically and with alerts. For real-time processing we used to have to program the event handling ourselves, but recently Oracle came up with Complex Event Processing (CEP) as a COTS product, which enables querying data on a data stream. Getting insight into a near-real-time data stream and doing some cherry picking is still a new and underdeveloped, but exciting, area. A small sketch of the short-term case follows below.
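As a minimal, tool-agnostic sketch of the short-term case (this is plain Java, not Oracle CEP or BAM; the event shape and the thresholds are taken from the fresh-food example above):

```java
import java.time.Duration;
import java.time.Instant;
import java.util.HashMap;
import java.util.Map;

/**
 * Toy "refinery" for the short-term case: correlate departure and arrival
 * measurements per shipment and only let the exceptions through, i.e.
 * shipments that take a day instead of the expected four hours.
 */
public class ShipmentRefinery {

    private static final Duration ALERT_THRESHOLD = Duration.ofDays(1);
    private final Map<String, Instant> departures = new HashMap<>();

    public void onDeparture(String shipmentId, Instant when) {
        departures.put(shipmentId, when);
    }

    public void onArrival(String shipmentId, Instant when) {
        Instant departed = departures.remove(shipmentId);
        if (departed == null) {
            return; // arrival without a known departure: ignored in this sketch
        }
        Duration travelTime = Duration.between(departed, when);
        if (travelTime.compareTo(ALERT_THRESHOLD) > 0) {
            // Residue: only the exception leaves the refinery as an alert.
            System.out.println("ALERT: shipment " + shipmentId + " took "
                    + travelTime.toHours() + " hours instead of ~4");
        }
    }

    public static void main(String[] args) {
        ShipmentRefinery refinery = new ShipmentRefinery();
        Instant now = Instant.now();
        refinery.onDeparture("fresh-1", now.minus(Duration.ofHours(26)));
        refinery.onArrival("fresh-1", now);   // took 26 hours -> alert
        refinery.onDeparture("fresh-2", now.minus(Duration.ofHours(4)));
        refinery.onArrival("fresh-2", now);   // within bounds -> silently discarded
    }
}
```

A CEP product expresses this kind of correlation declaratively as a continuous query over the stream, instead of hand-written bookkeeping like the map above.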
Monday, February 9, 2009
Aim for Stability Based Architecture (SBA)
Software Architecture is the art of translating Business needs into a suitable and working technical solution.
As with architecting and building houses, our sponsor expects that the product will be stable and last for quite some time. Looking back, we see that in the days of Cobol, applications could last more than 20 years (and some still exist, even after the retirement of their creators). But with the speed of new developments and the rise of hype-driven development, the application lifespan (or the time before it gets outdated) is getting shorter and shorter. Java, among others, has a bad reputation in this area: Struts, for instance, lasted some 2 to 3 years, after which you were the laugh of the year if you chose Struts in your solution. SOA is another example: just when working with it finally paid off, somebody decided that SOA is dead and we have to run after the latest hype, cloud computing. The half-life of a software architecture is getting shorter and shorter, and it happens too often that at the time of go-live we almost have to start thinking about upgrading, because the version is nearly outdated.
I'd like to propose a new architecture type, Stability Based Architecture (SBA), in which we aim for a stable (core of the) application and attach an expected lifetime to the application. The core of the application should be stable and last a long time, just like rock under pressure: it folds but doesn't break. Only then can we justify the costs of implementation.
Labels:
Architecture,
Open Standards,
SBA,
SOA,
Stability Based Architecture
Monday, January 26, 2009
The Return of Oracle Forms
Oracle Forms has been around for some 20 years now. It looked like it would go extinct over time, but the financial crisis might give it a second chance.
Oracle Forms is a GUI typically used in transaction-based applications for high-volume data entry. Forms started off as a character-based application in the eighties, evolved into a GUI client in the early nineties, and can now be deployed as an applet-based application running in a browser. The basis of Oracle Forms hasn't changed much: a dedicated Forms process is connected to a database. In the early days this Forms process was tightly connected to the GUI; in the current implementation the Forms process is located on the application server and the GUI is located in a browser on the client.
The big difference with a 'normal' web application is that the connection between the Forms client and the application server is stateful, whereas with a 'normal' web application it is stateless. This dictates the type of usage: for every user a separate Forms process is started on the server, which consumes processing power and memory. In short, Forms is typically used by a known community, while a web application can be used by an unknown community. When looking at application types, Oracle Forms is excellent for high-volume data entry and aligns with non-functionals like minimal mouse usage. In terms of development effort, Oracle Forms is still far more productive than any Java-based application (roughly 4 hours per function point versus 8 hours).
A couple of years ago the statement was that Oracle Forms was outdated, but slowly it is getting back into the Oracle picture.
In the Netherlands we still have a large community of clients with (core) applications running on Oracle Forms (and not the latest version...). For those clients, talking about SOA is shouting from over the horizon, and certainly in this financial climate new investments are hard to get.
Currently the step towards SOA is too big (and too expensive) for these organizations, but revitalization of the core systems built on Forms will be a must in the coming years. Forms can be used as a stepping stone towards a SOA-based architecture: Forms can nowadays be exposed as a channel service in a web portal (see also ADF Forms) and can also call out to (web) services. And Oracle is investing in Forms modernization again.
So is Oracle Forms going to get a second chance?
Labels:
Forms,
Modernization,
Oracle,
Revitalize,
SOA
Thursday, January 15, 2009
Designing XML, reuse is nice but aim at usage first!
Designing data has been, for ages, the cornerstone of all applications. It relates to all the information needed by and shared between applications. This is no different when we look at the SOA world; actually, it becomes even more important. First of all, what data is a service responsible for? And secondly, what information is shared between the services?
The first part is all about the definition of a data model, as (mostly) used in a database. This definition is used for storing information and assuring its validity. The second part is about the definition of XML documents, as used in the definition of service boundaries; this data is more fluid, floating around between services.
Now what makes a good data designer? In the database world it took us a while to learn how to define a thorough data model. It is not only about WHAT data is needed but also HOW the data is used: what types of queries are to be expected, what the data quantities are per functionality, how the data is related, etc. A lot of 'old fashioned' techniques, like NIAM, helped us in the past in defining a good data model.
When I look at the XML world, to my surprise, a lot of these 'good old design tricks' are forgotten. In projects I come across XML definitions (coming from standards organizations) which are bound to fail in practice. Yes, all the data needed for the functionality is there (the WHAT question). The HOW question, however, is most of the time completely forgotten. In theory it looks beautiful to create an XML tree containing all the functionality needed in the world. In practice this means, most of the time, that the ratio of 'actually needed data' to 'total data sent' is less than 5%, which puts a burden on performance and network load.
It is excellent that one XML document can contain all the information needed to share across all the services, but in practice it means that when you change one part of it, all services need to change.
With the ever-increasing need for exchanging data it becomes essential that we aim for a methodology for designing XML documents. In the meantime, let's first apply the KISS (Keep It Simple, Stupid) principle and check whether the XML design really works in practice, on both a functional and a usage level. Only after that should we start looking at reuse...
Wednesday, January 7, 2009
Plea for standardization on consumer goods chargers!
Once a month I do some travelling. Whether it is for business or fun, I usually carry around the gear I need: laptop(s), phone, eBook reader, iPod, camera, etc. All these appliances need to be charged every so many days. No problem with that, but what annoys me is that EVERY appliance needs a DIFFERENT charger, differing in type, size and weight. It gets even worse: there is no charger that I can reuse! And when I travel from country to country I also need adapters for power plugs, and sometimes even adapters for adapters. The whole IT world has been buzzing with SOA talk (Service Oriented Architecture), with the premise that everything should be based upon open standards. But when you look at the basic infrastructure for consumer goods, every brand delivers its own type of charger. What is the problem with defining a standard for charging consumer goods? Let's apply the SOA principles here as well! It would save me (and all other travellers around the world) from carrying around a surplus of wires, adapters and chargers.
Thanks in advance!