Tuesday, November 24, 2009

Success criteria for development projects

Many more factors impact the success of development projects today than a couple of years ago. I have learned to focus on a very limited number (3-4) of key objectives in large-scale projects in order to stay on track. Key objectives are the principles of a project and should be well-known, understood and accepted within the project team. Missing even one of them means failure. When it comes to decision making, the key objectives provide a good foundation for moving forward. This is a proven approach for projects, but also for the development process itself. Here are my four (4) key objectives for successful system development:
  1. Precise, high quality requirements based on strong user involvement
  2. A motivated and skilled team with the ability to learn constantly
  3. Flexible and realistic project management
  4. An open and innovative environment that understands that software development is a heuristic process and accepts failure
I don’t want to oversimplify system development. I do know that there are many other factors and ramifications. But people tend to get lost in too many details and micro-management tasks. Concentrating on the essentials leads to successful development in a very complex world.

Saturday, October 31, 2009

Security Architecture – an approach to outline a framework

Security in the scope of vast, distributed systems needs to be specified, designed, implemented and operated based on a solid framework – let’s call it a Security Architecture. I have seen many approaches to cover this tricky task. Many of them tend to be too complex. Unfortunately, complexity is not a driver for security (in contrast to simplicity). On the other hand, it’s a tough job to keep the Security Architecture for huge systems simple. Besides the need for a simple approach, transparency and clarity are important attributes of a Security Architecture that should be addressed as key objectives. Security controls need to be structured and encapsulated in the relevant components of the Security Architecture in a clear and traceable manner. I prefer a structure consisting of the following main components:

  1. Security Infrastructure [ Communication and Network Security, Perimeter Security, …]
  2. System Security Services [ Access Control, Identity Management, Credential Management, Audit, Backup and Recovery, …]
  3. Application Security [ Operating Systems, Databases, Web and Application Servers, SaaS, Enterprise Applications, Collaboration, and Messaging, … ]
  4. Service Security [ System Maintenance, System Operation, Change Management, Incident Management, Event Management and Forensics, …]
  5. Security Management [ Policies and Roles, Risk Management, Training and Awareness, … ]
Components 1-4 are the basic layers of the Security Architecture. Security Management is a vertical component which covers and affects the other four essential parts of the Security Architecture.
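As a minimal sketch of the traceability idea (all control and component names below are illustrative, not a prescribed catalog), such a structure can be captured in code so that every security control maps to exactly one component:

```python
from enum import Enum

class Component(Enum):
    """The five main components of the Security Architecture."""
    SECURITY_INFRASTRUCTURE = 1
    SYSTEM_SECURITY_SERVICES = 2
    APPLICATION_SECURITY = 3
    SERVICE_SECURITY = 4
    SECURITY_MANAGEMENT = 5  # vertical: covers and affects components 1-4

# Illustrative control catalog: each control is assigned to exactly one
# component, which keeps the structure clear and traceable.
CONTROLS = {
    "perimeter-firewall":  Component.SECURITY_INFRASTRUCTURE,
    "identity-management": Component.SYSTEM_SECURITY_SERVICES,
    "db-hardening":        Component.APPLICATION_SECURITY,
    "incident-response":   Component.SERVICE_SECURITY,
    "awareness-training":  Component.SECURITY_MANAGEMENT,
}

def controls_for(component):
    """Trace which controls are encapsulated in a given component."""
    return [name for name, c in CONTROLS.items() if c is component]

print(controls_for(Component.SYSTEM_SECURITY_SERVICES))
```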

Saturday, October 24, 2009

Quality of Non-Functional Requirements

As already outlined, non-functional requirements are a crucial success criterion in distributed systems (and in software development in general). These requirements need to be prioritized in order to focus on the main use cases of the system. Besides prioritization, using a clear syntax is important as well, because non-functional requirements tend to be fuzzy, which limits their acceptance during development. Like their functional siblings, non-functional requirements should adhere to the following criteria:

  • Clear and unambiguous
  • Described by using simple and consistent terminology which is well-known by all stakeholders
  • Testable at the end of the day in order to achieve a measurable outcome
  • Traceable from the beginning until the end (architecture, design, implementation, test, roll-out)
  • Technically feasible, considering the tools and systems that are part of the development and deployment scenario
  • Realistic to realize, depending on the planning horizon, the skill set and location(s) of the team, the infrastructure and the development environment

Ideally, a designated requirements manager and a software architect are the perfect team members to make this happen. All stakeholders should agree on this procedure at the beginning and are asked to monitor adherence over the whole lifetime. “Lessons learned” are a good approach to refine this process. Good and bad examples should be used to tune a successful requirements management to perfection.
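To make the criteria above concrete, here is a minimal sketch (field and requirement names are invented for the example) of a non-functional requirement written so that testability and traceability can be checked mechanically:

```python
from dataclasses import dataclass, field

@dataclass
class NonFunctionalRequirement:
    """One NFR, structured so the criteria above can be checked."""
    id: str          # stable key, enables traceability across artifacts
    statement: str   # clear, unambiguous wording in agreed terminology
    metric: str      # how the outcome is measured (testability)
    target: str      # the concrete threshold a test can verify
    priority: int    # 1 = relevant for the main use cases
    traces_to: list = field(default_factory=list)  # architecture/design/test artifacts

# Invented example: the fuzzy "the system must be fast" becomes testable.
latency = NonFunctionalRequirement(
    id="NFR-042",
    statement="Response time of the search service under normal load",
    metric="95th-percentile latency with 500 concurrent users",
    target="<= 800 ms",
    priority=1,
    traces_to=["ARCH-7", "TEST-113"],
)
```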

Friday, October 23, 2009

Defeating OCSP – is it that ez?

Certificates in the scope of asymmetric cryptography are a proven means to achieve a solid state of security in many applications (predominantly using TLS/SSL). Options in this area of computer security are limited. Establishing trust and running a CA (Certificate Authority) is cumbersome and needs a lot of resources. A widespread usage of certificates comes along with a higher number of revoked certificates that must be identified in order to deny access. Certificate revocation lists (CRLs) and OCSP (Online Certificate Status Protocol) are options to establish these checkpoints. Keeping CRLs up to date and handling their constant growth is a well-known problem. OCSP provides this status on a server in a centralized approach. A checkpoint (server, client, any consumer of a certificate) can ask the OCSP server in order to make sure that the presented certificate is not revoked. But can we trust the response? Not in every case, as we could learn from a security expert recently (check heise Security and other resources). ResponseData and ResponseStatus are different structures within the response, but only ResponseData is covered by a signature. A faked response (injected by running a man-in-the-middle attack) could send a “tryLater”, which is a valid status. It’s up to the OCSP implementation at the checkpoint to handle this response properly. And it’s up to you to imagine how this is handled in a real-world implementation. It’s kinda scary if you can’t even trust a security service intended to provide “more security”.
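A defensive client should therefore fail closed: anything that is not a successful, signed response must be treated as “status unknown”, never as “go ahead”. Here is a minimal sketch using the Python cryptography package (signature and freshness checks are only indicated, not implemented):

```python
from cryptography.x509 import ocsp

def certificate_is_good(der_response: bytes) -> bool:
    resp = ocsp.load_der_ocsp_response(der_response)
    # Only a SUCCESSFUL response carries signed ResponseData; every other
    # status ("tryLater", "internalError", ...) sits outside the signature
    # and can be injected by a man in the middle. Fail closed on all of them.
    if resp.response_status != ocsp.OCSPResponseStatus.SUCCESSFUL:
        return False
    # NOTE: a real checkpoint must additionally verify the signature over
    # ResponseData against the trusted responder certificate, check
    # freshness (thisUpdate/nextUpdate) and match the nonce. Omitted here.
    return resp.certificate_status == ocsp.OCSPCertStatus.GOOD
```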

Friday, August 21, 2009

More on Software Architecture – Architectural Styles

Our domain of software architecture is improving constantly and gets specified in more detail. This is not a bad thing, because it helps to avoid misconceptions about its place in software development. Architectural styles are one building block in the process of striving for the best-fitting solution. Looking for analogies, ‘real architecture’ in terms of brick and mortar has a long history of architectural styles like Gothic, Tudor, Art Nouveau, or Postmodern. These styles are influenced by social, political, cultural and other trends. Of course, technical factors like material, technologies, infrastructure and appliances play an additional role (and money, of course). In software architecture, these factors are predominant. But trends and hypes play their role as well; SOA is a good example to illustrate this. In general, architectural styles can be clustered based on the part of the system they describe, like structure [Component, Layered], deployment [Client-Server, n-Tier, Peer-to-Peer], communication patterns [Message Bus], and others. Sometimes it is hard to assign a style properly to a cluster. Styles to handle user interaction [MVC] might be a part of the structure or could be seen as a separate cluster. But this is not important. More relevant is the process of combining styles in order to achieve the best solution. Some styles might be determined by the requirements directly. Hardware, integration of legacy systems and other infrastructure-related preconditions might limit the choice. Moreover, the skill set of the team is another factor. In general, functional and non-functional requirements should influence the architectural styles and their combination used to build the system (hypes should not, btw). If this is done properly, the benefits are proven patterns, a common understanding, interoperability and reuse. In other words, the combination of the best-fitting architectural styles is a key success criterion in the process of building a software system.
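As a toy illustration of combining two styles (all names invented for the example): a layered structure for the components, plus a message bus as the communication style between them, so no layer reaches into another layer’s internals:

```python
from collections import defaultdict

class MessageBus:
    """Communication style: publishers and subscribers stay decoupled."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        for handler in self._subscribers[topic]:
            handler(event)

class PersistenceLayer:
    """Structural style: a layer that only talks to the bus, never
    directly to another layer's internals."""
    def __init__(self, bus):
        bus.subscribe("order.created", self.store_order)

    def store_order(self, event):
        print(f"storing order {event['id']}")

bus = MessageBus()
PersistenceLayer(bus)
bus.publish("order.created", {"id": 4711})
```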

Tuesday, August 04, 2009

Software based Transactional Memory

This is interesting news for C# developers. MSDN DevLabs offers STM.NET as an extension to the .NET Framework 4.0 (Beta 1) for experimental purposes. STM stands for Software Transactional Memory and provides support for concurrency in order to use multi-core architectures efficiently. STM does not provide a lock framework. It comes with an approach to isolate shared state, which is the real issue when it comes to concurrency (because of deadlocks and/or race conditions). Based on the ‘transaction pattern’ used in other areas of computer science, code (and memory) is handled in isolation to enable atomic execution. Code sections are demarcated accordingly. It will be interesting to check on this, especially pertaining to performance implications. But anyhow, it might be a promising step, at least for the C# folks.
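STM.NET itself is a C#/.NET extension, but the underlying idea is language-neutral. Here is a toy sketch (in Python, all names invented, not the STM.NET API) of optimistic transactions over shared state: reads are versioned, writes go to a private log, and the commit is validated and published atomically:

```python
import threading

class TVar:
    """A transactional variable holding shared state."""
    def __init__(self, value):
        self.value = value
        self.version = 0

_commit_lock = threading.Lock()

class Transaction:
    def __init__(self):
        self.reads = {}   # tvar -> version seen at first read
        self.writes = {}  # tvar -> new value (private write log)

    def read(self, tvar):
        if tvar in self.writes:
            return self.writes[tvar]
        self.reads.setdefault(tvar, tvar.version)
        return tvar.value

    def write(self, tvar, value):
        self.writes[tvar] = value

def atomic(body):
    """Retry the transaction until it commits without conflicts."""
    while True:
        tx = Transaction()
        result = body(tx)
        with _commit_lock:
            if all(tvar.version == v for tvar, v in tx.reads.items()):
                for tvar, value in tx.writes.items():
                    tvar.value = value
                    tvar.version += 1
                return result
        # somebody changed a variable we read: roll back and retry

# Usage: an atomic transfer between two shared accounts.
a, b = TVar(100), TVar(0)
def transfer(tx):
    tx.write(a, tx.read(a) - 10)
    tx.write(b, tx.read(b) + 10)
atomic(transfer)
print(a.value, b.value)  # 90 10
```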

Friday, July 24, 2009

Security in the scope of Software Architecture

Security must be addressed very early in development. Taking existing and well-defined security requirements for granted, the system and software architecture (an artifact called “Architecture Specification”) must consider and reflect security. I do favor an approach based on a set of essential building blocks in order to achieve the expected level of security. Parts of the building set are:
  • Secure Components
  • Secure Infrastructure and Services
  • Secure Execution Environment
  • Secure Network Environment (zones, compartments, sandboxes)
  • End-to-End Security (supported by services like identity, authentication, authorization, auditing)
  • Secure Operation (Logging, Import/Export, Backup/Restore) and Security Appliances

The approach addresses common security paradigms like “Layered Defense” and “Security in Depth”, as well as general design objectives (modularity, consistency, extensibility, robustness). These building blocks are the foundation for a security architecture where security controls can be applied. Just to drill down a little bit, secure components can be characterized as follows:

Design and composition of components are essential steps to meet the requirement for a sustainable architecture. Components must be secured in accordance with recommended practices. Design and implementation must adhere to security principles, design patterns and coding rules. Components must be configured according to the security policies of the organization. Remember the weakest-link paradigm: one weak component could compromise the security of the whole architecture. Components that expose interfaces to the “outside world”, like user or communication interfaces, are especially under attack or even the entry point for an intruder. This must be considered when specifying, designing and developing these entities. And interfaces must be well-defined to support an integrative approach in order to achieve end-to-end security. The overall security requirements for the component design should be derived from general security objectives such as confidentiality, integrity, availability, and accountability.
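As a minimal sketch of that idea (names are illustrative, not a recommended API): a secure component validates everything crossing its exposed interface against a whitelist and keeps its internal state private, so one careless caller cannot compromise it:

```python
import re

class AccountComponent:
    """Sketch of a secure component: validate at the exposed interface,
    keep internal state private (all names are illustrative)."""
    _USER_RE = re.compile(r"[a-z][a-z0-9_]{2,31}")  # whitelist, not blacklist

    def __init__(self):
        self._balances = {}  # internal state, never exposed directly

    def open_account(self, user: str) -> None:
        # Interface contract: reject everything that does not match the
        # whitelist instead of filtering out "known bad" input.
        if not self._USER_RE.fullmatch(user):
            raise ValueError("invalid user name")
        self._balances.setdefault(user, 0)

component = AccountComponent()
component.open_account("alice")          # fine
# component.open_account("alice; drop")  # would raise ValueError
```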

Wednesday, July 22, 2009

Software Architecture and Requirement Management…

… should be close friends that like to communicate and interact. Why? Software Architecture is the main and first interface between the requirements coming from different stakeholders and the development team. Based on the requirements, the “Architecture Specification” will be developed. Requirements are fetched from very different sources, depending on the domain. The bullets below list just some of them, clustered as Functional (FR) and Non-Functional Requirements (NFR):
- Customers (FR)
- Existing Platforms, Mainline (FR, NFR)
- General Market Requirements (NFR)
- Standards and Regulations (FR)
- Best Practice and Patterns (NFR)
- Quality Attributes, preferably prioritized, utility trees are recommended (NFR)
As a result, the “Architecture Specification” should reflect all requirements as well as their importance and emphasis in the project. Any mismatch (or even a missing requirement) can be detected in the scope of a review or even an architecture test. This is good news because it avoids very expensive changes in later steps of the software development process.
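Since the list above recommends utility trees for quality attributes, here is a minimal sketch (scenarios and ratings are invented for the example) of how such a tree can be captured and used as a review aid:

```python
# A utility tree refines quality attributes into concrete, prioritized
# scenarios that the "Architecture Specification" can be reviewed against.
utility_tree = {
    "Performance": {
        "Latency": [
            # (scenario, business importance, technical risk)
            ("Checkout responds in < 1 s at 1,000 concurrent users", "H", "M"),
        ],
    },
    "Security": {
        "Authentication": [
            ("A revoked credential is rejected within 5 minutes", "H", "H"),
        ],
    },
}

# Review aid: surface the high-importance, high-risk scenarios first.
for attribute, refinements in utility_tree.items():
    for refinement, scenarios in refinements.items():
        for scenario, importance, risk in scenarios:
            if (importance, risk) == ("H", "H"):
                print(f"{attribute}/{refinement}: {scenario}")
```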

Tuesday, July 21, 2009

Software Architecture (is alive and kicking)

I’m about to go back to the roots a little bit, trying to write and blog more on software and system architecture. It might be necessary because it’s the foundation for all the distributed and web applications as well as for security architecture. It’s ez to get lost in all the details that come with these challenging topics. And, of course, it is my core business and field of expertise.
Software Architecture is the highest level of abstraction in the area of software development (but it is not superficial or shallow, not at all). Software Architecture is the foundation for all the other, more detailed development steps that follow in the life cycle of a system. Because of its early position in this process, Software Architecture is an important success criterion. And because of this fact, it should be tested, at least by a very detailed review. In order to be testable, Software Architecture must be documented, preferably in a single document called the “Architecture Specification”, based on well-defined views. Diagrams and figures are mandatory. The quality of the Software Architecture predominantly affects the quality of the whole system in creation. A well-documented and widely taught Software Architecture is a perfect guidance for the development team. Project management needs it to make parallel development on components happen. It is highly recommended to communicate the Software Architecture to all other relevant stakeholders: Customers, 3rd Parties, Marketing, Operations & Services, and Test Teams. More is about to come …

Tuesday, June 30, 2009

Firefox 3.5 is just around the corner

I have checked on the new version (RC) of the browser during the last few days – and I’m really impressed. Even the performance of complex and (JavaScript-)heavyweight web sites is formidable. Besides this increase in real user experience (driven by speed), the following technical characteristics are worth mentioning:

  • HTML 5.0 support - which includes offline data storage & access (I still have my security concerns), video and audio support which makes plug-ins obsolete (sure, it needs the supported formats/codecs), and other features
  • A new JavaScript Engine – which is one reason for the significant increase in performance
  • Privacy Support – it helps to limit the data you leave behind when browsing around; the private browsing mode allows this (no cookies, no history, no caching, no auto-filled stuff)
  • Enhanced Search Capabilities – added to the existing URL bar capabilities that are manifold and fast
  • Geo-awareness – web apps that need this information can fetch the data from Firefox 3.5 (sure, this needs your okay to do so)
  • Many other changes and enhancements that make browsing the web more fun

If you like Firefox, go ahead and upgrade to version 3.5. The new version should be available by the end of the day.

Wednesday, May 20, 2009

Rock meets Search Engine

Are you looking for a rock concert in Germany? Try the event page www.hooolp.com and you might find out that your favorite band plays close to your hometown. But this cool tool is not just about rock’n’roll. It’s also good for jazz, blues, ska and even plain pop. And it works the other way around: you can register your event on www.hooolp.com to reach a broader audience. On this page, an innovative location-based service meets rock’n’roll. Check it out!

Tuesday, April 28, 2009

Identity is king

Many large distributed systems have one success criterion in common – identity (management). This is true for social networks (which we all love to be part of), e-commerce platforms, systems operated in the clouds, as well as for networks in the realm of automated demand/supply operation (aka Smart Grids). The requirements are not new at all: the identity of a large number of participants must be handled in a way that peers can trust each other based on one or more identity providers. Identity is needed for authentication in order to enforce access control to a resource (a website with profile information, a virtual shopping cart, a database table, a data point, whatever). It’s about the identity of the subject (the source) which has initiated the request to get access to a resource. Before the access rules can be applied (authorization), this authentication must be handled in a trustworthy way. This is complex to achieve, especially in the case of multiple domains that operate their own realm of trust. This kind of trust is a precious thing that needs to be protected and maintained. Besides all theory and technical details, it (the precious thing of digital identities in an existing community) is an important asset. A social network identity could be used to get access granted to other resources like a virtual shopping mall or a booking engine for last-minute flights. More scenarios are obvious …
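A toy sketch of that order of operations (all names, tokens and permissions are hypothetical): the identity provider first establishes who is asking, and only then are the access rules applied:

```python
class IdentityProvider:
    """The party both peers trust to vouch for an identity (toy model)."""
    def __init__(self, valid_tokens):
        self._valid_tokens = valid_tokens  # token -> subject

    def authenticate(self, token):
        return self._valid_tokens.get(token)  # None means: unknown identity

# Access rules, applied only after authentication succeeded.
ACL = {"alice": {"cart:read", "cart:write"}, "bob": {"cart:read"}}

def access(idp, token, permission):
    subject = idp.authenticate(token)   # 1. who has initiated the request?
    if subject is None:
        return False                    # no trusted identity, no access
    return permission in ACL.get(subject, set())  # 2. authorization

idp = IdentityProvider({"t-123": "alice"})
assert access(idp, "t-123", "cart:write")
assert not access(idp, "t-456", "cart:write")  # unknown token is rejected
```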

Sunday, April 19, 2009

Folks, it's spring!


In times of the 'GREEN IT', it's absolutely necessary to post such a picture on a software blog! ;-)

Friday, April 17, 2009

Software Architecture and balancing stakeholder needs

I recognized recently that I blog a lot about computer security (for several reasons). This is definitely not a boring topic. On the contrary, it’s complex, fascinating and a fast-moving target. You need a decent understanding of computer science to keep up with all the things going on out there. On the other hand, the area of system security (comprising computer and network security) is full of misconceptions. Too many people still believe that a firewall is the silver bullet to keep out attackers, worms and other malware. I don’t comment on that anymore. But this never-ending discussion leads me to one of my core competences – software architecture and all the team, communication and development related aspects. A software architect needs to understand stakeholder needs. Typical stakeholders are end users, developers, test staff, marketing folks, and project managers, just to name a couple of them. But understanding and documenting is not enough. A stable and successful architecture balances stakeholder needs and reflects this in all lifecycle stages, from requirements management through testing, delivery and maintenance. I know this happens just in theory, in an ideal world somewhere in a software glasshouse. But we should strive to come closer. Achieving tradeoffs is an important success criterion in the process of creating a stable and lasting architecture. These tradeoffs should be the result of negotiations with the ultimate goal to come to a win-win situation for all participants (→ stakeholders). From the technical perspective, most of the tradeoffs must be achieved between functional and non-functional requirements (aka quality attributes). From this we see that there is a strong link to security as an essential quality attribute in a connected world of ubiquitous computing and routed protocols. Many design decisions in the scope of security architecture are in marked contrast to the ideas of usability folks and the needs of project management. But these contradictions must be addressed and resolved, which might be a tough job. Anyhow, for the sake of a successful product achieving broad market acceptance, failing to balance stakeholder needs is not an option.

Wednesday, April 01, 2009

Computer Security in the scope of Web 2.0

The current issue of ACM Queue puts web security in focus. One article is titled Cybercrime 2.0: When the Cloud Turns Dark. In essence, it is really hard to disagree. I could just add a couple of web attack scenarios based on update services or instant messaging. A lack of security in the design of web applications and the underlying infrastructure is the root cause, as stated in the conclusion of the article. But it is really hard to see some kind of remedy in the near future. On the contrary, new solutions like offline web applications, cloud computing and the so-called Web-OS are all based on vulnerable technologies but connect a large number of users and machines. This will increase the attack surface because each single hole in the system might give an attacker access to a large network of assets and services. Some people already call these new applications and architectures Web 3.0. Unfortunately, nothing has changed in terms of security.

Thursday, March 19, 2009

Waiting for my Netbook

There are rumors about a netbook made by Apple. Besides many other sources, Gizmodo came up with a couple of details and pics (touchscreen, …). The assumption that this gadget will be available before Christmas sounds logical. It makes picking the right present for myself easier … :-)

Silverlight 3 at Mix09 / Las Vegas

The most important fact is that Silverlight 3 applications (basically a subset of WPF) can be deployed and executed outside the browser in a sandbox. Besides this deployment scenario, advanced video features (H.264) and an updated version of Blend are the most remarkable new features for the smart client ecosystem.

Thursday, March 12, 2009

More testing tools for parallelization

Intel® offers a so-called “Application Concurrency Audit Tool” for free. The Intel® Concurrency Checker 2.1 is available for Windows and Linux and can be downloaded from their software network. I started to play around a little bit with the tool. It provides a decent overview of CPU utilization, elapsed time, parallel time, utilization regarding threads, and levels of concurrency. It allows attaching to running application executables as well as testing Java apps.

Wednesday, March 11, 2009

What's on my reading list?

Four books, basically:
  • The Long Tail by Chris Anderson
    ... recommended for all people interested in the new economy and e-commerce
  • Outliers by Malcolm Gladwell
    ... it's about genius
  • Das Schneekind by Nicolas Vanier
    ... a musher travels through BC and Alaska with his wife and a baby
  • Mechanics of User Identification and Authentication by Dobromir Todorov
    ... it's for geeks

Friday, March 06, 2009

Offline-Web Applications & Security

We can read a lot about computing in the clouds these days, even in ordinary newspapers. It’s a big business with SOME open questions. I started to compile a couple of thoughts in Web Browser, Web-OS and the Era of Cloud. Besides the real differences to client-server computing (“Dude, sometimes I can spot them, and sometimes not!”), I do have my concerns pertaining to security. Take the so-called Offline-Web Applications (sometimes called Web 3.0) for example. Besides the fact that this term is a contradiction in itself, the vulnerabilities are an existing problem. Running web servers everywhere increases the attack surface. The HTTP servers on the client machines are needed to keep the applications (which are web applications) running in case of a network blackout. In addition, maintaining state is another must to allow a kinda real application feeling. Maintaining state in the scope of web applications based on HTTP, with all its consequences, has been a security problem from the beginning. Nowadays, state is maintained by using cookies and other remnants initiated and used by browsers and plug-ins. Talking about Offline-Web Applications, small client-side databases are in use. But this list is not complete yet. HTML 5 specifies Structured Client-Side Storage, which includes database storage (local and relational). Some web browser vendors are planning to support this to a certain degree (session, local, database). This will change attack scenarios as well as the attack surface – combined with excessive scripting. But this is another story …

Dresden’s castle got a (new) roof

webduke pics 2009

Thursday, February 19, 2009

(These Days) Development Skills

I do remember very well the times when software development was envisioned as a process of combining components (either ActiveX or some kind of Beans), and smart people started to perceive software as a kinda utility (like water, electricity or network access). I did not comment on this. Talking about the technical aspects, not much has changed. Writing (beautiful) code is still a creative and heuristic process. And, besides patterns and practice, it needs strong skills and a lot of experience. Again, there is nothing new about this (since 19** ?). Just two examples from today’s problem spaces: parallelization and distributed apps on a large scale. Coping with today’s hardware platforms (multi-core) needs a lot of knowledge and diligence in developing the applications that run on them. There is no magic tool, workbench, compiler, etc. to solve such problems implicitly, not yet. You need a decent understanding of processors, caching, threads and shared memory. Sure, testing this stuff comes with another steep learning curve. Secondly, large distributed systems (take the upcoming clouds as an example) do need a different approach pertaining to availability and consistency (see my posts on BASE, CAP, etc.). It’s up to the development team to hide the trade-off between data consistency, system availability, and tolerance to network partitions. This is not a piece of cake. It needs new approaches and thinking models, pretty similar to when we moved from the client-server paradigm to more or less simple distributed application patterns. In essence, the heuristic part of software development is alive and kicking.
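To illustrate why shared memory is the tricky part (a minimal Python sketch; how many updates get lost depends on the runtime and the scheduler), here is the classic lost-update race and its fix:

```python
import threading

counter = 0
lock = threading.Lock()

def unsafe_increment(n):
    global counter
    for _ in range(n):
        counter += 1        # read-modify-write on shared state: a race

def safe_increment(n):
    global counter
    for _ in range(n):
        with lock:          # the critical section makes the update atomic
            counter += 1

threads = [threading.Thread(target=unsafe_increment, args=(100_000,))
           for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
# With unsafe_increment the result may be well below 400000, because
# updates get lost when threads interleave; with safe_increment it is
# always exactly 400000.
print(counter)
```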

Monday, February 16, 2009

Security in Cloud Computing (Distributed Systems)

Security is one of the most important requirements to make a software system running in the cloud acceptable for the intended user community. This is especially true in times like these, when people’s privacy is under attack on a daily basis. Just follow the news in Germany. It’s a big concern and not far-fetched, not at all.
Computer security has a couple of basic pillars; Identity Management is one of them. In the new realm of cloud computing, this comes along with authentication and authorization in distributed systems. SAML (the SAML 2.0 protocol) and OpenID are more or less standards to support the implementation, also in terms of interoperability. Big vendors’ cloud architectures (just see the Geneva project as an example) do support these standards. This is not just a good approach in terms of interoperability; it also leads to a better understanding and visibility regarding the underlying implementation and infrastructure, which probably leads to more trust and better acceptance.
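As a hedged sketch of what a relying party in such a federated setup has to check before trusting an assertion from another domain (field names are invented; real SAML 2.0 assertions are signed XML and should be handled by a vetted library, not hand-rolled code):

```python
import hmac, hashlib, time

# Trusted identity providers and the key material to verify their
# assertions (in real SAML this would be an X.509 signing certificate).
TRUSTED_ISSUERS = {"https://idp.example.org": b"shared-secret-key"}

def accept_assertion(assertion):
    """assertion: dict with issuer, audience, expires, payload (bytes),
    signature (hex digest) - all field names invented for this sketch."""
    key = TRUSTED_ISSUERS.get(assertion["issuer"])
    if key is None:
        return False                                 # unknown provider
    if assertion["audience"] != "https://shop.example.com":
        return False                                 # issued for somebody else
    if assertion["expires"] < time.time():
        return False                                 # stale, replay risk
    expected = hmac.new(key, assertion["payload"], hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, assertion["signature"])
```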

Sunday, February 15, 2009

New Pics on Panoramio

Friends, I added new pics to my Panoramio account. Just check on: http://www.panoramio.com/user/1514395

Thursday, January 08, 2009

My technical wish-list for 2009

Talking about the New Year - what do I expect from the technical perspective? Here comes my wish list, which is a blend of my expectations and the trends I see in general for 2009.

  • A better support for effective parallel programming, also with a more implicit approach
  • New solutions to interact with smart devices (cell phones) to overcome tiny keyboards and cumbersome handling
  • A synthesis of a Handheld-GPS and a simple mobile phone to reduce the number of devices in the outdoors (let’s call it a rugged GPS-Phone)
  • Cool applications (and gadgets) making use of the so-called “cloud computing”
  • GPS and RFID in much more tools and gadgets (cameras, mobile phones, bikes…) with the option to switch it off anytime
  • Location Based Services (ez to use, respecting privacy, useful) with real benefits for the user
  • NetBooks for all known OS platforms
  • Home automation for mainstream households; many use cases are conceivable and could help save energy, which would give the buzzword “Green IT” a very new meaning
  • More awareness of security and privacy issues in a connected world which leads to new options to protect digital information, assets, and people’s privacy
  • E-books based on gadgets and applications that create a new reading experience; don’t get me wrong, I will always stick to real books made of paper, but I see e-books as an interesting alternative, beyond the advantage of carrying a lot of books wedged in one handy device when travelling
  • A new album from my favorite band TOOL
  • Anything to add? Feel free to comment.