QSOS and OpenBRR (lightweight methodologies) – Gaps and improvements
A full comparison of QSOS and OpenBRR would fill a stack of papers, books, and blog posts. The aim of this post is to summarize the main points and characteristics of each method and then identify possible gaps and vulnerabilities. So let’s start!
The main points of OpenBRR’s Wikipedia definition are:
- Open source software assessment methodology
- Offers a reduction of the Total Cost of Ownership
- Currently at the RFC stage
- Methodology sponsors: Carnegie Mellon West Center for Open Source Investigation, CodeZoo, SpikeSource and Intel.
Put more technically, what does OpenBRR offer?
- 4 phases/levels of software assessment (Quick Assessment, Target User Assessment, Data Collection and Processing, Data Translation)
- 8 classified criteria/metrics (Usability, Quality, Security, Performance, Scalability, Architecture, Support, Documentation)
- Criteria are organized into a two-level tree hierarchy.
QSOS’s definition (by its community) is:
” QSOS is a method conceived to qualify, select and compare free and open source software in an objective, traceable and argued way. It is made available to all, under the terms of the GNU Free Documentation Licence ”. Furthermore, QSOS provides a set of tools and editors for creating your own criteria templates (Template Editor, Sheet Editor, O3S, QSOS Engine, CVS Repository). So it is easy to find differences between OpenBRR and QSOS. Technically, QSOS consists of:
- 4 steps (as part of an iterative process): Define, Assess, Qualify, Select
- 5 classified criteria/metrics (Intrinsic durability, Industrialised solution, Integration, Technical adaptability, Strategy)
- Criteria are organized into a three-level tree hierarchy.
- Documentation and further information are available to the user.
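Such a multi-level criteria tree can be sketched as a nested structure. In the minimal Python sketch below, the top-level axes come from QSOS, but the sub-criteria names and scores are purely illustrative assumptions, not taken from an actual QSOS template:

```python
# Sketch of a three-level criteria tree; leaves hold absolute scores
# on QSOS's 0-2 scale. Top-level axes are QSOS's real categories;
# the sub-criteria names and scores below are invented for illustration.
criteria_tree = {
    "Intrinsic durability": {
        "Maturity": {"Age": 2, "Stability": 1},
        "Adoption": {"Popularity": 2, "References": 1},
    },
    "Industrialised solution": {
        "Services": {"Training": 1, "Support": 2},
    },
}

def tree_score(node):
    """Average scores bottom-up through the hierarchy."""
    if isinstance(node, dict):
        return sum(tree_score(child) for child in node.values()) / len(node)
    return node

print(tree_score(criteria_tree))  # 1.5
```

Aggregating scores bottom-up like this is one simple way to roll a deep hierarchy into a single number; it also hints at why a three-level tree takes more effort to fill in than a two-level one.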
Comparison and vulnerabilities
However long a full comparison could take, in my opinion a brief comparison together with a detection of vulnerabilities is always useful when talking about software (and not only software). Differences show up where there are no similarities, so let us first look at the evaluation process both methods share:
1) Each methodology proposes a predefined set of criteria for evaluating FLOSS projects.
2) Evaluation means scoring the various criteria based on a standard scoring procedure. For a given FLOSS project, this step results in assigning a score to each criterion (these scores are always absolute).
3) During an evaluation, the absolute scores are weighted by the users based on their importance to the current evaluation context (weighted absolute scores become relative scores).
4) A decision can then be made based on the resulting relative scores.
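The shared score-then-weight process above can be sketched in a few lines of Python. The criteria names, scores, and weights here are hypothetical, not drawn from either method’s real templates:

```python
# Step 2: hypothetical absolute scores (0-2 scale, as in QSOS)
# assigned to each criterion for two candidate projects.
absolute_scores = {
    "ProjectA": {"maturity": 2, "documentation": 1, "community": 2},
    "ProjectB": {"maturity": 1, "documentation": 2, "community": 1},
}

# Step 3: user-defined weights expressing each criterion's importance
# in the current evaluation context.
weights = {"maturity": 3, "documentation": 1, "community": 2}

def relative_score(scores, weights):
    """Turn absolute scores into a single weighted (relative) score."""
    return sum(scores[criterion] * w for criterion, w in weights.items())

# Step 4: the decision is based on the resulting relative scores.
ranked = sorted(absolute_scores,
                key=lambda p: relative_score(absolute_scores[p], weights),
                reverse=True)
print(ranked)  # ['ProjectA', 'ProjectB']
```

Note how changing the weights (step 3) can flip the final ranking without touching the absolute scores; this is exactly the context-dependence both methods rely on.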
Now for the differences:
1) The order shown above represents the QSOS method.
2) OpenBRR suggests swapping steps 2 and 3 so that users first select the criteria relevant to their context and thus avoid scoring useless ones. Furthermore, OpenBRR allows the creation of new criteria as well as tailoring the scoring procedure for each criterion.
3) QSOS considers the absolute scores obtained when applying the scoring procedures to be universal. Hence, the scoring procedure for a particular version of a FLOSS project only takes place once.
4) OpenBRR is a standard methodology, but it assumes that every user instantiates it in a slightly different way.
5) OpenBRR is at the RFC stage, whereas QSOS provides a set of tools and criteria templates.
6) OpenBRR was developed and sponsored by well-known organizations (Carnegie Mellon West Center for Open Source Investigation, CodeZoo, SpikeSource, Intel). QSOS, on the other hand, was created by Atos Origin and is a community-based project.
7) QSOS provides 5 classified criteria/metrics, whereas OpenBRR provides 8.
8) QSOS provides rich documentation and a very well organised web page for the user, whereas OpenBRR only provides a rather poor website.
QSOS vulnerabilities:
1) Although QSOS provides a very useful set of tools, the O3S criteria for “software families” are only available in French (on the project’s web page).
2) QSOS’s three-level criteria hierarchy makes it more complicated than OpenBRR’s two-level hierarchy.
3) There is not much business support for this method, whereas OpenBRR is developed and sponsored by notable companies.
OpenBRR vulnerabilities:
1) Absence of tools and facilities for creating your own criteria in an easy and fast way.
2) It is still at the RFC stage, whereas QSOS provides tools and templates for the user and is a community-based project.
To conclude this article, I would like to mention a disadvantage that QSOS and OpenBRR have in common: different criteria lead to different scores, and different scores lead to different outcomes in practice. Is that a serious problem? Not always; sometimes it turns out to be an advantage and sometimes a disadvantage. Let’s see:
The advantage is that each model provides its own criteria and its own iterative process for evaluating and editing scores and data. The more approaches available for assessing a piece of software, the better. On the other hand, the absence of a common model (such as QUALOSS, as defined by Jean-Christophe Deprez and Simon Alexandre) makes the decision process more difficult and complicated. In the end, the different approaches and the absence of a scalable common model boil down to the question below:
” Would you buy a house constructed by a very famous company almost without mentioning your wish list, or would you build your own house using “community” tools, even though defining your wish list could be a difficult process? ”
No more thoughts, no more doubts…