Open-source software assessment methodologies

Several methodologies have been created to define an assessment process for free and open-source software. Some focus on non-functional aspects such as the maturity and durability of the project and the strategy of the organisation behind it; other methodologies add functional criteria to the assessment process.

Existing methodologies

There are more than 20 different OSS evaluation methods.[1]

  • Open Source Maturity Model (OSMM) from Capgemini
  • Open Source Maturity Model (OSMM) from Navica[2]
  • Open Source Maturity Model (OSSMM) by Woods and Guliani[3]
  • Methodology of Qualification and Selection of Open Source software (QSOS)
  • Open Business Readiness Rating (OpenBRR)
  • Open Business Quality Rating (OpenBQR)[4]
  • QualiPSo[5]
  • QualiPSo Model for Open Source Software Trustworthiness (MOSST)[6][7]
  • Towards A Trustworthiness Model For Open Source Software: How to evaluate Open Source Software[8]
  • QualOSS – Quality of Open Source[9]
  • Evaluation Framework for Open Source Software[10]
  • A Quality Model for OSS Selection[11]
  • Atos Origin Method for Qualification and Selection of Open Source Software (QSOS)[12]
  • Observatory for Innovation and Technological transfer on Open Source software (OITOS)[13]
  • Framework for OS Critical Systems Evaluation (FOCSE)[14]

Comparison

Comparison criteria

Stol and Babar have proposed a comparison framework for OSS evaluation methods. Their framework lists criteria in four categories: criteria related to the context in which the method is to be used, the user of the method, the process of the method, and the evaluation of the method (e.g., its validity and maturity stage).

The comparison presented below is based on a different set of criteria:

  • Seniority: the year the methodology was introduced.
  • Original authors/sponsors: the methodology's original authors and sponsoring entity (if any).
  • License: the distribution and usage license for the methodology and the resulting assessments.
  • Assessment model:
    • Detail levels: whether the methodology offers several levels of detail or assessment granularity.
    • Predefined criteria: whether the methodology provides predefined criteria.
    • Technical/functional criteria: whether the methodology permits domain-specific criteria based on technical information or features.
  • Scoring model:
    • Scoring scale by criterion: the scale on which each criterion is scored.
    • Iterative process: whether the assessment can be performed and refined in several passes, each improving the level of detail.
    • Criteria weighting: whether weights can be applied to the assessed criteria as part of the scoring model.
  • Comparison: whether a process for comparing assessed software is defined by the methodology.
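The scoring elements above (a scale per criterion plus optional weighting) typically combine into a single weighted score per project. The following sketch illustrates that step in general terms only; the criteria names, scores, and weights are hypothetical examples, not taken from any of the listed methodologies:

```python
def weighted_score(scores, weights):
    """Combine per-criterion scores into one overall weighted score.

    scores  -- dict mapping criterion name to its raw score
    weights -- dict mapping criterion name to its relative weight
    """
    total_weight = sum(weights.values())
    return sum(scores[c] * weights[c] for c in scores) / total_weight

# Hypothetical assessment of one project on a 0-to-2 scale
# (the scale QSOS uses), with made-up weights:
scores = {"maturity": 2, "documentation": 1, "community": 2}
weights = {"maturity": 3, "documentation": 1, "community": 2}

print(round(weighted_score(scores, weights), 2))
```

Methodologies that support an iterative process would recompute such a score as criteria are refined; those that support comparison would evaluate several candidate projects against the same weighted criteria set.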

Comparison chart

Criteria | OSMM Capgemini | OSMM Navica | QSOS | OpenBRR | OpenBQR[4] | Open Source Maturity Model
Seniority | 2003 | 2004 | 2004 | 2005 | 2007 | 2008
Original authors/sponsors | Capgemini | Navicasoft | Atos Origin | Carnegie Mellon Silicon Valley, SpikeSource, O'Reilly, Intel | University of Insubria | QualiPSo project, EU commission
License | Non-free license, but authorised distribution | Assessment models licensed under the Academic Free License | Methodology and assessment results licensed under the GNU Free Documentation License | Assessment results licensed under a Creative Commons license | Creative Commons Attribution-Share Alike 3.0 License | Creative Commons Attribution-Share Alike 3.0 License
Assessment model | Practical | Practical | Practical | Scientific | Practical | Scientific
Detail levels | 2 axes on 2 levels | 3 levels | 3 levels or more (functional grids) | 2 levels | 3 levels | 3 levels
Predefined criteria | Yes | Yes | Yes | Yes | Yes | Yes
Technical/functional criteria | No | No | Yes | Yes | Yes | Yes
Scoring model | Flexible | Flexible | Strict | Flexible | Flexible | Flexible
Scoring scale by criterion | 1 to 5 | 1 to 10 | 0 to 2 | 1 to 5 | 1 to 5 | 1 to 4
Iterative process | No | No | Yes | Yes | Yes | Yes
Criteria weighting | Yes | Yes | Yes | Yes | Yes | Yes
Comparison | Yes | No | Yes | No | Yes | No


References

  1. ^ Klaas-Jan Stol, Muhammad Ali Babar. "A Comparison Framework for Open Source Software Evaluation Methods". In: Proc. OSS 2010, IFIP AICT vol. 319, pp. 389–394.
  2. ^ Zahoor, Adnan; Mehboob, Khalid; Natha, Sarfaraz (September 2017). "Comparison of Open Source Maturity Models". Procedia Comput. Sci. 111 (C): 348–354. doi:10.1016/j.procs.2017.06.033. ISSN 1877-0509.
  3. ^ Woods, D., Guliani, G.: Open Source for the Enterprise: Managing Risks, Reaping Rewards. O'Reilly Media, Inc., Sebastopol (2005)
  4. ^ a b Taibi, Davide; Lavazza, Luigi; Morasca, Sandro. OpenBQR: A framework for the assessment of OSS (PDF). In: International Conference on Open Source Development, Adoption and Innovation. pp. 173–186. doi:10.1007/978-0-387-72486-7_14.
  5. ^ "CORDIS | European Commission".
  6. ^ V. Del Bianco, L. Lavazza, S. Morasca, D. Taibi and D. Tosi.: Quality of Open Source Software: The QualiPSo Trustworthiness Model. In: proc. 5th IFIP WG 2.13 International Conference on Open Source Systems, OSS 2009, Skövde, Sweden, June 3–6, 2009. Proceedings
  7. ^ V. Del Bianco, L. Lavazza, S. Morasca, D. Taibi and D. Tosi.: The QualiSPo approach to OSS product quality evaluation. ACM/IEEE, In Proceedings of the IEEE International Workshop on Free Libre Open Source Software (FLOSS) – Colocated with ICSE, 2010
  8. ^ D.Taibi: Towards a trustworthiness model for Open Source Software:How to evaluate Open Source Software. LAP LAMBERT Academic Publishing. 2010. ISBN 3844389687
  9. ^ "QualOSS – CETIC". cetic.be. European Commission. Retrieved 11 January 2019.
  10. ^ Koponen, T., Hotti, V.: Evaluation framework for open source software. In: Proc. Software Engineering and Practice (SERP), Las Vegas, Nevada, USA, June 21–24 (2004)
  11. ^ Sung, W.J., Kim, J.H., Rhew, S.Y.: A Quality Model for Open Source Software Selection. In: Proc. Sixth International Conference on Advanced Language Processing and Web Information Technology, Luoyang, Henan, China, pp. 515–519 (2007)
  12. ^ Atos Origin: Method for Qualification and Selection of Open Source software (QSOS) version 1.6, Technical Report (2006)
  13. ^ Cabano, M., Monti, C., Piancastelli, G.: Context-Dependent Evaluation Methodology for Open Source Software. In: Proc. Third IFIP WG 2.13 International Conference on Open Source Systems (OSS 2007), Limerick, Ireland, pp. 301–306 (2007)
  14. ^ Ardagna, C.A., Damiani, E., Frati, F.: FOCSE: An OWA-based Evaluation Framework for OS Adoption in Critical Environments. In: Proc. Third IFIP WG 2.13 International Conference on Open Source Systems, Limerick, Ireland, pp. 3–16 (2007)
