Wednesday, December 22, 2010

A Blessed Christmas!

Merry Christmas and Good Times in 2011!


Tuesday, October 19, 2010

It's autumn

River Elbe, October 2010 [webduke pics]

Friday, October 15, 2010

Security must be based on a solid (security) architecture

We can read a lot about vulnerabilities, malicious code and horrifying threat scenarios these days. And we can also learn from all these experts how to fight this. Actually, none of it is about war and weapons (which would not help anyway). Everything is about solid requirements management (covering security from the very beginning), a decent architecture, and a design that takes security seriously. Sure, the team must be qualified to handle this.

Just some thoughts: A sustainable architecture is composed of discrete elements, called components. Components are the core parts of an architecture, and their design and composition are essential to meet the requirement for a sustainable architecture. Besides these factors, security is another success criterion. Components must be secured in accordance with industry-recommended practices. Design and implementation must adhere to security principles, design patterns and coding rules, and components must be configured according to the security policies of the organization. This applies to every component the architecture consists of. Remember the weakest-link paradigm: one weak component can compromise the security of the whole architecture.

Components that expose interfaces to the "outside world", like user or communication interfaces, are especially under attack and may even be the entry point for an intruder. This must be considered when specifying, designing and developing these entities. Interfaces must also be well-defined to support an integrative approach and achieve end-to-end security: a system composed of components must assure security when messages are sent from one component to another, and even beyond the system boundary.
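
To make the boundary point concrete, here is a minimal sketch in Java. All names (BoundaryValidatingHandler, onMessage, the payload format) are illustrative assumptions, not a real framework; the idea is simply that everything arriving at an exposed interface is treated as untrusted until it passes a whitelist check.

```java
// Hypothetical sketch: validate at the component boundary before any processing.
// Class, method and payload names are illustrative, not from a real system.
import java.nio.charset.StandardCharsets;

public final class BoundaryValidatingHandler {

    private static final int MAX_PAYLOAD_BYTES = 4096;

    /** Entry point exposed to the "outside world"; everything is untrusted here. */
    public void onMessage(byte[] rawPayload) {
        if (rawPayload == null || rawPayload.length == 0) {
            throw new IllegalArgumentException("empty payload rejected");
        }
        if (rawPayload.length > MAX_PAYLOAD_BYTES) {
            throw new IllegalArgumentException("payload exceeds limit");
        }
        String text = new String(rawPayload, StandardCharsets.UTF_8);
        // Whitelist allowed characters rather than blacklisting bad ones.
        if (!text.matches("[A-Za-z0-9_:,\\-\\s]{1,512}")) {
            throw new IllegalArgumentException("payload contains unexpected characters");
        }
        process(text); // only validated data crosses the trust boundary
    }

    private void process(String validated) {
        System.out.println("processing: " + validated);
    }

    public static void main(String[] args) {
        new BoundaryValidatingHandler()
            .onMessage("meter_42:READ,phase_1".getBytes(StandardCharsets.UTF_8));
    }
}
```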

Wednesday, July 28, 2010

Robustness and resilience of large distributed applications and networks

In the era of clouds and large distributed automation and control networks, we need to deal with a vast (and growing) number of endpoints integrated in a dynamic system. It is probably a misconception to assume that all these peers can be protected comprehensively at all times. Hence, it must be an important objective that the protection of the entire system does not depend on the security status (pertaining to integrity, confidentiality and availability) of each and every endpoint. In other words, a compromised node must not affect or infect the stability and protection of the entire distributed system. This shall be addressed in the system and security architecture and needs to be defined (and tested!) as a crucial requirement. A (layered) defense in depth, as a general design principle, can help to meet this requirement. In addition, intrusion detection, intrusion prevention and a quick isolation of the compromised node can help to minimize the overall impact. Plan for failure is the underlying principle to implement this efficiently. Besides these classical security precautions and controls, a robust design as well as adequate redundancy mechanisms for critical subsystems can support system stability.
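
As a toy illustration of the isolation idea, here is a hypothetical sketch in Java; the names (NodeRegistry, quarantine, mayCommunicate) are mine, not from any product. An intrusion detection hook flips a node's state, and the routing layer simply refuses to talk to quarantined peers, so the rest of the system keeps running.

```java
// Minimal sketch of "isolate a compromised node"; all names are assumptions
// for illustration, not part of any real framework.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public final class NodeRegistry {

    enum State { HEALTHY, SUSPECT, QUARANTINED }

    private final Map<String, State> nodes = new ConcurrentHashMap<>();

    public void register(String nodeId) { nodes.put(nodeId, State.HEALTHY); }

    /** Called by intrusion detection; the system keeps running without the node. */
    public void quarantine(String nodeId) {
        nodes.replace(nodeId, State.QUARANTINED);
    }

    /** The routing layer consults this before forwarding any traffic. */
    public boolean mayCommunicate(String nodeId) {
        // Unknown nodes are treated as quarantined: fail closed, not open.
        return nodes.getOrDefault(nodeId, State.QUARANTINED) == State.HEALTHY;
    }

    public static void main(String[] args) {
        NodeRegistry registry = new NodeRegistry();
        registry.register("substation-7");
        registry.quarantine("substation-7");            // IDS flagged it
        System.out.println(registry.mayCommunicate("substation-7")); // false: isolated
    }
}
```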

Tuesday, June 22, 2010

Security in large distributed networks (aka Smart Grids)

Security is not only a crucial requirement for conventional data and communication networks. It must also be addressed in networks that are installed and operated to automate and manage energy grids in order to achieve a Smart Grid. Definitions may vary, but the need for security in the area of critical infrastructures is undisputed. Beyond architecture and compliance, real implementation requirements exist. The paper Enhancing IEC 62351 to Improve Security for Energy Automation in Smart Grid Environments, presented at the 2010 Fifth International Conference on Internet and Web Applications and Services in Barcelona, provides insights.
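
Purely as a hedged sketch of what such implementation requirements can look like in code: IEC 62351-3 profiles TLS for TCP-based telecontrol traffic, and in plain Java a TLS connection could be opened as below. The host name, port and protocol version are illustrative assumptions; a real deployment would configure mutual authentication with certificates from its own PKI.

```java
// Hedged sketch only: shows a plain Java TLS client connection of the kind
// IEC 62351-3 requires for TCP-based telecontrol protocols. The endpoint and
// protocol version here are made-up assumptions, not a real configuration.
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

public final class TlsTelecontrolClient {
    public static void main(String[] args) throws Exception {
        SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
        try (SSLSocket socket = (SSLSocket) factory.createSocket("rtu.example.net", 2404)) {
            socket.setEnabledProtocols(new String[] {"TLSv1.2"});
            socket.startHandshake(); // mutual authentication would use a client keystore
            System.out.println("cipher suite: " + socket.getSession().getCipherSuite());
        }
    }
}
```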

Wednesday, June 02, 2010

Test your security!

Testing the security of distributed systems is a very complex thing (sure, security is inherently complex). This is because security requirements are functional as well as non-functional in nature. To meet such a basket of requirements, good practice is highly recommended. The subsequent bullets list the necessary steps, in a proposed order, to achieve this goal:
  • Document all functional and non-functional requirements and develop use case scenarios based on them (a picture helps a lot!)
  • Invite security professionals for support and guidance
  • Conduct a comprehensive threat assessment based on a well-documented system architecture and (preferably) a security architecture (invite all relevant stakeholders: product management, architects, developers, test folks, …)
  • The architecture must support flexible patch and update management
  • Review the resulting design, at least the security relevant components
  • Check all 3rd party components in detail to identify known weaknesses; if any are found, look for alternatives
  • Provide and teach (!) secure coding and secure design principles to the team
  • Make sure that the team has enough time to learn and to apply such rules and principles (project management must plan accordingly!)
  • Test all functional security requirements according to your test specification (use well-documented requirements and use case scenarios to specify test cases)
  • Use tools to check your code to identify flaws and deviations from the guidelines mentioned above
  • Apply code review if tools are not sufficient
  • Use a realistic test environment (setup) to run black-box tests based on tools (fuzzers, etc.; see the sketch after this list)
  • Test especially all user interfaces (focus on web-based interfaces) as well as communication stacks
  • Document all testing results and establish a rating based on criticality
  • Communicate and share your experience
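
The fuzzing step above can start very small. The following naive Java sketch (target host, port and payload sizes are made-up assumptions) just throws random bytes at a service and logs connection failures; real work would use a proper fuzzer and monitor the target process itself.

```java
// Toy fuzzing sketch referenced in the list above. The target endpoint and
// iteration count are illustrative assumptions, not a real test setup.
import java.io.OutputStream;
import java.net.Socket;
import java.util.Random;

public final class NaiveFuzzer {
    public static void main(String[] args) throws Exception {
        Random random = new Random(42); // fixed seed so failures are reproducible
        for (int i = 0; i < 100; i++) {
            byte[] payload = new byte[1 + random.nextInt(1024)];
            random.nextBytes(payload);  // random, almost certainly malformed input
            try (Socket socket = new Socket("test-target.local", 8080)) {
                OutputStream out = socket.getOutputStream();
                out.write(payload);     // the target must not crash on this
                out.flush();
            } catch (Exception e) {
                // A refused connection after a prior payload may indicate a crash.
                System.out.println("iteration " + i + ": " + e.getMessage());
            }
        }
    }
}
```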

Friday, April 23, 2010

Divide and Protect

Divide and conquer is a well-known strategy in software design and architecture. In terms of OOA/OOD it is not really my favorite approach, but that is not the topic of this post.
Divide and protect is one option to secure large distributed systems. The concept of Divide and Protect is about the compartmentation of a system into functional blocks with identical requirements in terms of security and privacy. It supports a defense-in-depth strategy, and it helps to handle the complexity of large installations. The compartmentation of a given system leads to security zones with different levels of trust that should be outlined in a diagram. Based on such a diagram (red = not trusted, …, green = trusted), the system architecture can be developed in a comprehensive manner. This is especially true for the communication architecture and the selection of appropriate protocols. By using this approach, non-functional requirements can be addressed at the very beginning of the product development process, which means far fewer change requests later on.
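
A minimal sketch of how such zones might show up in code, under assumed names (ZonePolicy, Zone, requiresGateway): each zone carries a trust level taken from the diagram, and any cross-zone communication is forced through a mediating gateway where the appropriate protocols are enforced.

```java
// Illustrative sketch of "divide and protect": zones with trust levels and a
// simple rule that cross-zone traffic must be mediated. All names are assumed.
public final class ZonePolicy {

    enum Zone {
        INTERNET(0), DMZ(1), CONTROL_NETWORK(2);    // red .. green in the diagram
        final int trustLevel;
        Zone(int trustLevel) { this.trustLevel = trustLevel; }
    }

    /** Direct traffic is only allowed within a zone; everything else goes
     *  through a gateway that enforces the stricter zone's protocol rules. */
    static boolean requiresGateway(Zone from, Zone to) {
        return from != to;
    }

    public static void main(String[] args) {
        System.out.println(requiresGateway(Zone.INTERNET, Zone.CONTROL_NETWORK)); // true
        System.out.println(requiresGateway(Zone.DMZ, Zone.DMZ));                  // false
    }
}
```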

Thursday, January 21, 2010

Failure is an option

I read about a conference on failing in Silicon Valley last year (and I was immediately fascinated by this way of thinking). Well, I think this approach to tackling issues and mistakes is one success criterion that makes this exceptional high-tech valley so successful. Ten years ago, I was lucky to work for one year in Santa Clara / Sunnyvale. At that time, Silicon Valley was the epicentre of the internet boom. I learned a lot about trying out new things and being innovative in thinking and in developing systems. I need to mention from time to time that software development is still a heuristic process. There is no tool that produces exactly the code that was intended at project initialization. It (the mother of all tools) has been promised for years, but it has not arrived yet. Sure, there are patterns, models, code fragments, IDEs and many other helpful things, but in the end it is up to a human being (the engineer) to compose and develop the solution. And this still works by trial and error in many cases. The most important lesson is to accept failure and to learn from it. This is easier said than done and means a learning process for the whole team. But it is absolutely necessary in order to handle the complexity of computer science these days in a professional manner. One of my methods to take care of this is to document the alternative solutions (to the chosen way of implementing it) for a given project and to outline the reasons for discarding them. As an easy rule of thumb: in order to develop solutions successfully, we need to learn how to fail in the right ways.