Friday, December 19, 2008
Tuesday, December 16, 2008
ACID, BASE, CAP and the Clouds
There is nothing new about these acronyms or the paradigm change. Great articles can be found on the web (for instance Werner Vogels' "Eventually Consistent") and at ACM Queue. I just want to emphasize the need to look at this from an architectural perspective in order to answer the question: is this application an appropriate choice to be deployed in the clouds or not? Or, to put it the other way around: is BASE good enough for you? Or is ACID essential for your system and the underlying requirements, which are basically quality attributes? For many systems in the industrial domain (where data integrity and consistency are king) the answer might be no (or probably not yet). Answering these questions correctly, in order to adhere to such paradigms (or not), is essential, because the clouds are already on the horizon.
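The BASE trade-off can be made concrete in a few lines. Here is a minimal sketch in Python (a toy, not any real datastore; all names are illustrative): two replicas of a key-value store under eventual consistency, where a write lands on one replica and is propagated lazily, so the other replica serves stale data until an anti-entropy sync runs. That staleness window is exactly what BASE accepts and ACID forbids.

```python
class Replica:
    """One replica of a toy key-value store; values carry a version."""
    def __init__(self):
        self.data = {}  # key -> (version, value)

    def write(self, key, value):
        version = self.data.get(key, (0, None))[0] + 1
        self.data[key] = (version, value)

    def read(self, key):
        return self.data.get(key, (0, None))[1]

def anti_entropy(source, target):
    """Propagate newer versions from source to target (one direction)."""
    for key, (version, value) in source.data.items():
        if target.data.get(key, (0, None))[0] < version:
            target.data[key] = (version, value)

primary, secondary = Replica(), Replica()
primary.write("order-42", "shipped")

stale = secondary.read("order-42")   # None: the update has not propagated yet
anti_entropy(primary, secondary)     # the replicas converge ("eventually")
fresh = secondary.read("order-42")   # "shipped"
```

An industrial system that cannot tolerate the `stale` read in the middle is the kind of system where ACID stays essential.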
Thursday, November 13, 2008
More Parallelism Support in upcoming IDE version
Monday, November 10, 2008
New Music on my turntable
- Metallica - "Death Magnetic" [****]
- Oasis - "Dig Out Your Soul" [***]
- AC/DC – "Black Ice" [*****]
Please note, AC/DC gets five stars [*****] by default.
Sunday, November 09, 2008
The Fall of the Wall and Black Swans
Saturday, November 08, 2008
Wednesday, November 05, 2008
Concurrency Aspects
Tuesday, November 04, 2008
A brief history of Access Control
Monday, November 03, 2008
Security and Virtual Machines, Part II
Monday, October 27, 2008
Sunday, October 26, 2008
Book Recommendation
Saturday, October 18, 2008
Thursday, October 16, 2008
It's already autumn
Monday, October 06, 2008
Surf Globally, Store Locally?
Sunday, September 28, 2008
Tuesday, September 16, 2008
Security and Virtual Machines
Other areas of security are affected as well; cryptography is just one example. I'm going to cover this fascinating topic in upcoming posts. And virtualization has started to move into the clouds. How is security doing high above us in fully virtualized solutions? Mmmh.
Sunday, September 14, 2008
Wednesday, September 10, 2008
Web Browser, Web-OS and the Era of Clouds
Tuesday, September 09, 2008
Buchempfehlung
Wednesday, September 03, 2008
A Tribute to Jim Gray
Ct - Parallel Extensions to C++
Monday, August 25, 2008
No short-term relief for multi-core programming issues available
Even the business world has identified the current situation as a problem. Fortune magazine addresses the topic in its latest issue with an interesting article - A chip too far? The article is about risks and opportunities, and a Stanford professor describes the situation as a crisis – maybe yes, maybe no. It is definitely a huge opportunity for skilled programmers and people with the right ideas. I totally agree with one statement: after years of abstraction in platforms and languages, the complexity and hardware dependencies of multi-core architectures steepen the learning curve for the average programmer dramatically.
Tuesday, August 19, 2008
Wrapping-Up Black Hat
- DNS issues, DNS issues, DNS issues
- Web 2.0 (and also “Web-OS”) security problems
- Flaws in software update mechanisms via the internet
- Issues resulting from weak random number generation
Sunday, August 10, 2008
Black Hat USA 2008 ...
Tuesday, July 29, 2008
Top 44
1964: The Kinks, The Kinks
1965: Byrds, Mr Tambourine Man
1966: Beach Boys, Pet Sounds
1967: The Doors, The Doors
1968: Beatles, The White Album
1969: The Band, The Band (Brown Album)
1970: Black Sabbath, Black Sabbath
1971: Led Zeppelin, Led Zeppelin IV
1972: Deep Purple, Made in Japan
1973: Pink Floyd, Dark Side of the Moon
1974: Lynyrd Skynyrd, Second Helping
1975: Patti Smith, Horses
1976: Eagles, Hotel California
1977: Sex Pistols, Never Mind the Bollocks, …
1978: Bob Seger, Stranger in Town
1979: Frank Zappa, Joe’s Garage
1980: AC/DC, Back in Black
1981: Gun Club, Fire of Love
1982: Scorpions, Blackout
1983: Tom Waits: Swordfishtrombones
1984: Judas Priest, Defenders of the Faith
1985: Dire Straits, Brothers in Arms
1986: Metallica, Master of Puppets
1987: U2, Joshua Tree
1988: Lou Reed, New York
1989: Faith No More, The Real Thing
1990: Midnight Oil, Blue Sky Mining
1991: Nirvana, Nevermind
1992: Alice in Chains, Dirt
1993: Melvins, Houdini
1994: Oasis, Definitely Maybe
1995: Neil Young (with Pearl Jam), Mirror Ball
1996: Soundgarden, Down on the Upside
1997: Bob Dylan, Time out of Mind
1998: Queens Of The Stone Age, Queens Of The Stone Age
2000: Johnny Cash, American III: Solitary Man
2001: REM, Reveal
2002: Bruce Springsteen, The Rising
2003: Calexico, Feast of Wire
2004: Wilco, A Ghost is born
2005: Audioslave, Out of Exile
2006: Tool, 10,000 Days
2007: Foo Fighters, Echoes, Silence, Patience & Grace
Monday, July 14, 2008
Key Success Criteria in Software Development
- Tame complexity: complexity kills any system; over-engineering leads to a system that is no longer maintainable and can only be extended via wrappers and similar nasty workarounds
- Requirements management: make sure that proper requirements management is in place, either by using the well-known (and rarely thoroughly applied) principles [complete, traceable, testable, …] or by newer, agile processes
- Software & system architecture: a well-defined and documented architecture must be in place and communicated to the whole team
- Plan for change and failure: we live in a world of constant change, and the same is true for software projects; this must be addressed in the way we create software. In addition, failure is an option: complex systems are hard to comprehend and error-prone; we must accept this and should strive to develop a strategy to deal with this fact in an open manner
Sunday, July 13, 2008
Friday, July 11, 2008
Domain Specific Languages and Software Factories
Friday, July 04, 2008
Handling Non-Functional Requirements with Utility Trees
Sunday, June 22, 2008
Saturday, June 21, 2008
More is about to come ...
Wednesday, May 21, 2008
Indirection #2
Sunday, May 18, 2008
Thursday, May 15, 2008
Limitations of today’s Parallelization Approaches
Thursday, May 08, 2008
Amdahl’s Law
Speedup = 1 / ((1 - P) + P/N)
You can play around with this formula. In essence, the sequential part of the code must be minimized to gain speedup, and even a high number of processors has little impact when the percentage of parallel code is low. Anyhow, it's just a formula. In practice, the speedup also depends on other conditions, such as the communication (mechanisms) between the processors and the cache [because cache is king! ;-)].
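Playing around with the formula is easiest in a few lines of code. A small Python sketch (the function name is my own):

```python
def amdahl_speedup(parallel_fraction, processors):
    """Amdahl's law: Speedup = 1 / ((1 - P) + P / N)."""
    p, n = parallel_fraction, processors
    return 1.0 / ((1.0 - p) + p / n)

# Even with a huge number of cores, a 90%-parallel program can never
# run more than 10x faster: the serial 10% dominates.
print(round(amdahl_speedup(0.90, 4), 2))     # 3.08
print(round(amdahl_speedup(0.90, 1000), 2))  # 9.91
```

Note how going from 4 to 1000 processors buys barely a factor of three here: shrinking the sequential part matters far more than adding cores.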
Monday, May 05, 2008
Wednesday, April 23, 2008
Computing in the clouds
Another great thing is cloud computing, as currently (21-April-2008) covered by an article on wired.com. It's a must-read! The company offers computing capabilities for everyone. This is good news for young enterprises that don't want to spend too much on hardware. In essence, it is a lesson and a serious example at the same time of how to offer computing power (and hardware, of course) as a service. I strongly believe we can expect much more from Amazon in the future. These guys are working on a genuinely new computing platform on the web. The vision that the network is the computer, as verbalized by other companies, might turn into reality …
Tuesday, April 22, 2008
Transactional Memory …
Talking about transactional memory: the concept is not that new, but it is not part of mainstream development platforms. Not yet – transactional memory could be available in hardware and software soon, and software-based transactional memory solutions already exist. The concept is similar to the mechanisms database systems use to control concurrent access to data. A transaction, in the scope of software-based transactional memory, is a section of code that covers a series of reads and writes to shared memory; logically, these occur at a single point in time. In addition, a log and a final operation, called commit, are also part of the concept. Sounds familiar? Published concepts of software-based transactional memory use an optimistic approach that is free of locks. It is up to the reader, within the scope of the transaction, to verify from the log that the data in shared memory has not been altered by other threads. If it has, a rollback is performed and the transaction gets another try …
This sounds really simple and desirable, right? So what's the drawback – why not use this technology to solve the existing problems in parallel computing? It is the performance, which is degraded by maintaining the logs and doing the commits. But those issues should be solved soon. It would make multi-threaded code easier to understand and could prevent the pitfalls that come with lock-based programming: each transaction can be perceived as a single-threaded, isolated operation.
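The read-log/validate/commit/retry cycle described above can be sketched in a few dozen lines. This is a toy illustration in Python, not any published STM implementation; all names (`TVar`, `atomically`) are my own, and a single lock is taken briefly at commit time only for validation, while the transaction body itself runs without locks:

```python
import threading

class TVar:
    """A shared cell with a version counter (toy STM building block)."""
    def __init__(self, value):
        self.value, self.version = value, 0

_commit_lock = threading.Lock()  # held only for the brief validate/commit step

def atomically(action):
    """Run `action(read, write)` optimistically; retry on conflict."""
    while True:
        read_log, write_log = {}, {}

        def read(tvar):
            if tvar in write_log:                    # read-your-own-write
                return write_log[tvar]
            read_log.setdefault(tvar, tvar.version)  # remember version seen
            return tvar.value

        def write(tvar, value):
            write_log[tvar] = value                  # buffered until commit

        result = action(read, write)

        with _commit_lock:
            # Validation: has anyone changed what we read?
            if all(tvar.version == v for tvar, v in read_log.items()):
                for tvar, value in write_log.items():  # commit
                    tvar.value, tvar.version = value, tvar.version + 1
                return result
        # Conflict: discard the logs and give it another try ...

account = TVar(100)

def deposit(read, write):
    write(account, read(account) + 10)

threads = [threading.Thread(target=atomically, args=(deposit,)) for _ in range(8)]
for t in threads: t.start()
for t in threads: t.join()
print(account.value)  # 180: no lost updates despite the optimistic access
```

Each `deposit` reads, computes and writes as if it were alone in the world; conflicting transactions simply roll back and retry. The overhead of the logs and the validation step is exactly the performance cost mentioned above.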
Tuesday, April 15, 2008
UML and Domain Specific Languages
Tuesday, April 08, 2008
Prioritize your non-functional requirements
Monday, April 07, 2008
F Sharp (F#) – a functional programming language in the scope of the .NET ecosystem
Sunday, April 06, 2008
Friday, March 28, 2008
Use Cases and Problems in Concurrency and Parallelization
What does this have to do with parallelization? In principle, the same approach applies here: what is the concrete requirement, and which use cases can be derived from it? The target platform (system architecture) must not be neglected either. Here is a (non-exhaustive) list of questions that can bring you closer to a solution:
- What does the hardware architecture of the target platform look like (keywords: hyper-threading, symmetric or asymmetric multi-core CPUs, shared memory, distributed memory)?
- Which operating system, runtime environments and programming languages are used on the target platform? The operating system matters less and less here, since preemptive multitasking is commonplace today, so at least parallel process execution is a given.
- Is performance the most important non-functional requirement? In this context, performance means data throughput and processing speed. This also touches on the fact that the days of ever higher clock rates are over and that the cache, unfortunately, does not solve every problem.
- Do the use cases in question have a synchronous or an asynchronous character?
- How should data be exchanged between execution streams (I use this term here quite deliberately), and to what extent? Do I need access to shared resources, and to what extent?
Based on the answers to these questions, an architecture can be developed that meets the requirements and relies on the right solutions. In this context, "right" also means using techniques that satisfy further non-functional requirements such as maintainability, extensibility, testability and simplicity (!). I cannot and do not want to provide a decision matrix here, but I will briefly sketch a few possible solutions.
- If point 3 is the decisive criterion, good utilization of the available cores should be the goal. In this case there is still the option of choosing load balancing across processes as the solution. If that is not possible due to the nature of the problem (arithmetic, etc.), a distribution across threads must be found instead. OpenMP, for example, is a good fit here. Of course, point 3 can also overlap with point 5 when many shared variables have to be accessed. Note that OpenMP does not protect us from race conditions or deadlocks.
- Regarding point 4, the most mature solutions certainly exist, and there is a good chance of solving this problem comprehensively. Good decoupling and asynchronous communication mechanisms are the success criteria here. Concrete use cases exist above all around user interfaces (UIs) on client systems. Execution streams (e.g. threads) in the presentation logic should be separated from the business logic by means of suitable asynchronous communication mechanisms in order to achieve loose coupling. I would add that, in my view, the task in point 4 (asynchrony) does not originally belong to the problem domain of parallelization/concurrency, although it is often placed there in discussions.
- Point 5 is certainly the most complicated task, since there is so far little support from development environments, compilers and frameworks. Until such support exists (for example based on "transactional memory" concepts), one has to live with all the difficulties of lock-free programming. That naturally includes a sophisticated testing concept to track down deadlocks and race conditions. In general, with respect to point 5, I can only recommend examining architecture and design closely for alternative ways of exchanging data.
Of course, a mix of solutions is also possible. A concrete example from high-performance computing is hybrid applications combining MPI (Message Passing Interface) and OpenMP.
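The decoupling recommended for point 4 can be sketched with two queues. This is a minimal Python illustration of the pattern only (not a real UI framework; all names are made up): the presentation side never calls the business logic directly, it posts a request and picks up the result asynchronously, so the two execution streams stay loosely coupled.

```python
import queue
import threading

requests = queue.Queue()
results = queue.Queue()

def business_logic_worker():
    """Runs in its own thread; processes requests until told to stop."""
    while True:
        job = requests.get()
        if job is None:                  # sentinel: shut down cleanly
            break
        results.put(("ok", job * 2))     # stand-in for real processing

worker = threading.Thread(target=business_logic_worker)
worker.start()

# "UI" side: fire the request and keep going; collect the answer later.
requests.put(21)
status, value = results.get()            # a real UI loop would poll instead
requests.put(None)                       # tell the worker to stop
worker.join()
print(status, value)                     # ok 42
```

The queues are the only contact point between the two threads, which is precisely what makes replacing or testing either side in isolation easy.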
Tuesday, March 25, 2008
Dogmushing
More Music is about to come ...
More OpenMP Details (Data Scopes, Critical Sections)
If the scope is not specified, shared is the default. But there are a couple of exceptions: loop control variables, automatic variables within a block, and local variables in called subprograms.
Let's talk about another important directive, the critical directive. The code block enclosed by #pragma omp critical [(name)] will be executed by all threads, but only by one thread at a time. Threads must wait at the beginning of a critical region until no other thread in the team is working on a critical region with the same name. Unnamed critical directives are possible.
This leads me to another important fact that should not be overlooked: OpenMP does not prevent a developer from running into deadlocks (threads waiting on locked resources that will never become free) or race conditions, where two or more threads access the same shared variable without synchronization and at least one of them modifies it concurrently. The results are non-deterministic, and such programming flaws are hard to find. It is always good coding style to develop the program in a way that it could also be executed sequentially (strong equivalency). This is probably easier said than done, so testing with appropriate tools is a must. I had the chance to participate in a parallel programming training a couple of weeks ago (btw, an excellent course!) where the Thread Checker from Intel was presented and used in exercises. It is a debugging tool for threaded applications, designed to identify deadlocks and race conditions.
Saturday, March 22, 2008
Google Phone
Wednesday, March 19, 2008
Just a recommendation
The phenomenon is that any other company, especially the software giant located in Redmond, would have been met with endless criticism had they built their products this way. Another interesting view is mentioned briefly: the so-called "three-tiered systems" consisting of a blend of hardware, installed software and a proprietary web application. Let's see how this model does in the future, because it is the antithesis of the open source / open systems approach.
Green IT and Parallel Programming
Monday, March 17, 2008
Erich Kästner and Dresden
Friday, March 14, 2008
More Music …. (Music is the best)
Some More Details on OpenMP
Wednesday, March 12, 2008
Concurrency
Multi/Many-Core, Multi-Threading and Parallelization
The era of the speed rush is over, at least as far as the processors built into today's workstations, servers and laptops are concerned. Due to the laws of physics, higher clock rates are no longer THE solution for better performance (besides optimizing instructions and the cache). Instead, the vendors advertise new products such as dual-core, quad-core, etc. In these solutions, several (main) processors are placed on a single chip, in a symmetric fashion: the cores are identical and can perform the same tasks. Each core gets its own cache, which offers further opportunities for performance optimization. Okay, that was the hardware. Unfortunately, software cannot always exploit these new capabilities per se; it has to be prepared for them. The road there is rocky and complicated and, unfortunately, not sufficiently supported by existing compilers, development environments and frameworks. One has to distinguish between the existing programming languages and runtime environments (Java, .NET), which offer different approaches. On top of that there are frameworks such as OpenMP and MPI, which have existed in the world of high-performance computing for quite a while. Recently, functional programming languages are being reconsidered in the context of parallelization as well. As one can easily see, the learning curve for developers is steep and the sources of error are numerous. In addition, the terminology is often fuzzy, especially when it comes to multi-threading, hyper-threading and multi/many-core as well as multitasking (in the context of operating systems). The rise of parallelization in software development is often compared to the milestone of object-oriented programming. This comparison is quite realistic.
On this blog I cover the topic of parallelization and try to point out solutions and current trends.
Tuesday, March 11, 2008
Shared Memory, Multi/Many-Core and OpenMP
Friday, March 07, 2008
Application Design, Multi/Many-Core and Other Basics
Thursday, March 06, 2008
Music is the best ...
Tuesday, March 04, 2008
Some Annotations
I would like to address another comment I received recently. Of course, before considering any parallelization activities, two preliminary steps are a must:
- Optimization of the numerics (i.e. the code itself)
- Optimization of cache usage (because cache is always king!)
Both activities might already yield the expected performance increase without diving overhastily into complex parallelization efforts.
Friday, February 29, 2008
More Concurrency
Raising Awareness
Tuesday, February 26, 2008
OpenMP and MPI
Sunday, February 24, 2008
A Real Highlight
Tuesday, February 19, 2008
Parallelism in Managed Code
Sunday, February 03, 2008
New Blog Pointer
Saturday, February 02, 2008
Book Recommendations
The holiday season and a week of skiing offered some extra time for reading (as usual). I have four recommendations from two different areas of interest: mountaineering/climbing and technology (not coding!). Here we go:
- Alexander Huber: Der Berg in mir. Klettern am Limit. An extraordinary book from an extraordinary guy. Alex is one part of the famous Huber Buam. His climbing record is unbelievable.
- Reinhold Messner: Die weiße Einsamkeit. Mein langer Weg zum Nanga Parbat. The Diamir (Nanga Parbat) is his mountain of destiny. Reinhold Messner is the greatest alpinist of all time. This book is breathtaking; it tells a story of tragedy and courage.
- Scott Berkun: The Myths of Innovation. This book is a must for all people with dreams and an innovative mindset. Yes, yes, yes – innovation needs a special environment to foster cool and smart ideas.
- Donald A. Norman: The Design of Everyday Things. Everything in it is so true. Hey, but why are we still surrounded by so many goods and devices of poor design and usability? This book was written in 1988!
In addition, I need to mention two DVDs for all folks in love with the outdoors, wilderness and the mountains:
- Der letzte Trapper [Director: Nicolas Vanier]
- Am Limit (The Huber Buam, again!) [Director: Pepe Danquart]
Monday, January 28, 2008
Some sort of Indirection
“genericcontainerwrappingadaptergatewayframeworkolalasolutions” perfectly.