[Evaluate Open Source Software] OSXP2024 – The art of evaluating open source: the views of free software experts

The video of the conference is available on YouTube.

Transcript of the Open Source Software Evaluation Conference

Walid Nouh: Hello everyone, thank you for being here. My name is Walid Nouh, I am the founder of a podcast called Projets Libres!. Today, I am very fortunate to host a round table where we will talk about the evaluation of free software. With me, we have four guests, with different profiles, who each have different interests in evaluating free software. The first is Lwenn Bussière, who is a Technology Assessor at NLnet; we’ll explain what NLnet is later. Thierry Aimé, who is in charge of free software at the Ministry of Finance. Benjamin Jean, who is the CEO of Inno3. And Raphaël Semeteys, who is a senior architect at Worldline. I will give each of you the floor to introduce yourselves and explain a little bit about who you are and what your structure does. Lwenn, would you like to start, please?

Presentation of the speakers

Lwenn Bussière: Of course. Thank you for inviting me, Walid. Hi everyone. My name is Lwenn and I work at NLnet, which is a foundation in the Netherlands that gives grants of between 5,000 and 50,000 euros to free and open source projects. Essentially, this funding comes from the European Commission, through the NGI Zero program, which is a cascade funding program of the European Commission, distributed by an intermediary organization, in this case NLnet. And so, in our work, we evaluate applications from hundreds of free and open source software, hardware, and firmware projects, so not only software development, seeking European funding.

Walid Nouh: Benjamin?

Benjamin Jean: Hello everyone, can you hear me? It’s a little intimidating, but very good. I feel like I’m whispering. So, nice to meet you, Benjamin Jean, I am the president and founder of Inno3, which is a small structure of fewer than 10 people. We are mainly focused on everything around the definition and implementation of open source policies and strategies. I’m being very brief, but these are contexts in which the question of evaluating open source often comes up. When is it relevant, on the one hand, to distribute your own projects as open source? And when is it relevant to use other people’s projects, and according to what criteria? Quickly, we also participated very recently in founding another structure called Open Source Experts. It is a group of open source companies that aims to offer grouped responses from open source players to large procurement contracts. It is also in this capacity that I find myself here.

Walid Nouh: Raphaël, if you’d take a microphone.

Raphaël Semeteys: Thank you. Thank you for the microphone too, hello everyone. I’m Raphaël Semeteys, I’ve been working with and around open source for quite some time. Previously, at Atos, I spent 10 years in the open source competence center, where we supported customers in their adoption of, and thinking around, open source. And for about 10 years now, I’ve been at Worldline, where we use a lot of open source components to build our own solutions, especially in the field of payment.

Walid Nouh: Thierry.

Thierry Aimé: And so I’m Thierry Aimé, I work at the Ministry of Finance, specifically at the General Directorate of Public Finances, in the Office of Architecture and Standards. As part of my duties, as the person responsible for free software choices for my department, we use a lot of free software and we are regularly led to question the quality of the software that could be deployed to meet identified needs. More broadly, I also take care of the free software support and expertise contract. It is a contract that benefits all central administrations; all ministries benefit from it. And it’s a contract through which we share our problems and our questions about identifying free software. In particular, we regularly carry out opportunity studies, strategic or technical studies, to identify this free software.

Why evaluate open source software?

Walid Nouh: Very well. If we now go into a little more detail, the first thing we can ask ourselves is: what is the need, why do we want to evaluate free software? You will see that everyone has very different answers depending on the structure in which they work. So we’ll start by talking about why you need to evaluate free software. Lwenn, can you explain a little why, at NLnet, you need to evaluate free software?

Lwenn Bussière: Yes, of course. Essentially, this is our mission, since we provide research and development funding. So we really work upstream, that is to say that we receive applications from young and old, independent developers, companies, universities, who propose a project, generally a brand new project and very often cutting-edge research, to finance the research and development of new open source projects. And this research and development funding is addressed to all levels of the European Internet infrastructure. For us, it’s important that users have control over their experience: the ability to self-host, the ability to have control over one’s data, the ability to know what’s going on in the code, and to have solutions that are accessible and secure for all of us. These are criteria that are of particular interest to us. So we work a lot upstream, on new research projects.

Walid Nouh: Do you receive a lot of applications?

Lwenn Bussière: Yes, we have calls for projects every two months, and for the last call for projects, which closed on October 1st, we had 567 applications from all over Europe. So, several hundred every two months. And since the beginning of the NGI and NLnet programs, we have financed more than a thousand projects. I think we have passed the 1,500 mark by now.

Lwenn Bussière

Walid Nouh: OK, so what’s interesting is that you do upstream evaluation, before the projects are mature, on projects that are really a bit early stage. Benjamin, in your case at Inno3, what are the different needs, the different cases in which you will need to evaluate free software?

Benjamin Jean: That’s why I took my PC. Actually, the question of evaluation comes up often, but not for the same reasons, and the term is perhaps used incorrectly. There is, I think, in a fairly similar way, the question of evaluation from a valuation point of view: what type of project can usefully be distributed as open source? On the one hand, what are the criteria that will be taken into account within a research center or an organization to decide on the opportunity to distribute as open source, and the modalities associated with it? There, there is an evaluation. There is also a posteriori evaluation. Here, it’s more about managing the valuation after the fact, but it’s still linked to valuation: this project that we released, what did it bring us in terms of the objectives we had set ourselves?

So it may be a little bit in line with what has been said, but in the internal context of the organization, so a little different. Then there is the question more related to procurement. In fact, when you define an open source policy internally, in the end, you can’t do it as if you were the only one in the world. That is to say, each organization must also interface with the policies of the other actors with whom it works, in particular suppliers. This is where we come across public procurement issues. And so, there is this question of how we can translate, into the contracts we publish, the issues we have regarding the use of open source internally, and how we are going to evaluate the responses on this basis: especially if we impose open source solutions, for example, if we impose community-backed solutions, or if we impose, as the State has done for certain contracts, only projects that are not carried out by companies but rather by non-profit organizations, for example.

Benjamin Jean

These are a few examples, but we need to be able to situate ourselves on this scale. And what else did I write down? Yes, there is also everything related to calls for commons, which ties in with what was just said about NLnet. It’s a term that we use a lot in France, and which I think is also starting to be used more and more in Europe, but which is very close to what NLnet does. The idea is to think about the evaluation of the project, but also of the community behind it. So there, it also joins the metrics; we’ll talk about the methods later. But how, in these slightly different calls for projects, do we finance projects that on the one hand are open, and on the other hand are maintained by communities that sustain the investment of public or private actors? There, there is an evaluation, generally the simplest possible, to then give rise to funding, even if smaller.

And I’ll just end with this, to show the diversity of situations in which we work on evaluation: we are involved in a project called Hermine, an open source and open data project, which aims to give an organization as complete a vision as possible of all the open source software it uses. For different reasons: the first is open source compliance, which is where the project initially came from. But there are other subjects that we are trying to graft onto this, in particular the question of sustainability. That is to say, if I use a lot of open source software within my organization, how do I evaluate the relevance of the choice of a given software with regard to criteria related to its community, its legal quality, and many other things? And that’s a way of evaluating that is much more automatable, with standards and methods that are developing. But it’s also part of our concerns.

Walid Nouh: We’ll talk about that later, precisely, in the methods too. Raphaël, on your side, at Worldline, you are more of an industrial user. Why evaluate? What are the challenges for you?

Raphaël Semeteys: Indeed, we’re rather downstream, let’s say. Clearly, the challenge for a group like Worldline, which builds critical, highly regulated, very visible services (as soon as something breaks down, there is no more payment, that kind of thing), is that we need to have confidence in the components that we select to build these services, whether they are open source or not. And so, in fact, it really starts with a risk analysis, that is to say: what risk do I take, in terms of my ability to operate critical services, by selecting this component or that other one? And that’s what led to the method that I helped create, which I’ll talk about later, for assessing these risks. So it’s really as an end user: making sure that we select components that are sustainable, that will continue to meet needs, that will receive security patches, that kind of thing.

Walid Nouh: You can pass the microphone along to Thierry. So, the ministry, your turn.

Thierry Aimé: Yes, so we are users of a very large number of free software packages. It didn’t happen all at once; free software entered the directorate gradually, around the turn of the 2000s.

And in fact, it is the result of a strategic policy of systematically consulting the free software offering for any new need, before possibly turning to the proprietary offering. So when we turn to the free offering, we define a perimeter, we define a need, and then often, very often, it is not one solution that emerges but a multitude of solutions, on a lot of subjects. There is a profusion of free software; this diversity, this dynamism, is quite surprising.

Thierry Aimé

And so the question that comes up very quickly is: how to choose? Many of these projects are sometimes personal projects, sometimes study projects, sometimes projects of publishers whose approach is in fact very similar to that of proprietary publishers, that is to say that they are the only masters on board their solutions. We have to evaluate all these things, estimate the risks, and determine the best solutions to deploy in-house, since the challenge when we deploy free software is not to do it for a few months; it is a question of sustainability over decades, where we have to guarantee that this software will continue to be available and maintained, and that we can even contribute to improving it. If nothing else, when anomalies are detected, we must be able to turn to the community to get our corrections accepted. So it’s really important for us to secure the use of free software by getting the initial choice right.

How to evaluate open source software?

Walid Nouh: Very good. Let’s move on to the big part of the conference, which is evaluation methods. We see that we have to evaluate, but then how do we evaluate? There are lots of methods, lots of different ways of doing things. And again, depending on the organization, the methods are a little different. So I’m going to ask the question again, this time in a little more detail. Lwenn, how do you manage to evaluate this whole mass of applications and projects that come to you?

Lwenn Bussière: The projects that come in are generally very young, and our application procedure is very simplified. By the way, if you have a project that you want to submit for our next call (Editor’s note: call for projects), don’t hesitate. So, we have a very simplified procedure, just a web form that takes maybe half an hour or an hour to fill out, where we encourage developers to forget about marketing and just explain to us what they do and why, and stop there. And so we receive these applications, each of which is one or two relatively short pages, and each time we need to do a lot of research for each of the projects we receive, since these are often projects that are very young or not even created yet. We’ll take a look at the repo, at the different projects that exist in the same space, at the different libraries that someone is building on; if it’s a hardware project, what solutions exist, what are the schematics, who are the suppliers; for firmware, what platforms are targeted, etc. So it’s very specific, and it’s tailor-made for each of the applications we receive. We have some criteria that are very strict, since the projects must be European; there must be a European dimension. This is European funding, so there are rules. We look at the budget, and we also check the strict criteria: the project must be research and development. And then, we evaluate the projects on three criteria, only three; so compared to Benjamin’s method, which is very precise, it’s going to be much more a process of taking notes and asking questions. We look at technical excellence: is the solution adapted to the problem? Is the solution maintainable? Is it clear? Our main criterion is impact, relevance: do we think the problem makes sense, or do we think it’s not a use case that interests us?
And finally, we look at the budget, the organization, the schedule. Does it seem to be a project that is viable with micro-grants of between 5,000 and 50,000 euros? Is the strategy feasible over a year with the funding we have? And what are the aspects where we can help and refine? After this first pass, we make a pre-selection of the projects that interest us, then we contact the people who applied and ask them questions to refine the direction of their project. This is when we ask detailed technical questions and compare with other projects that exist: why create a new solution, why use this library rather than that one? After this exchange phase, which is often very enriching for us and for the people who apply, we make a second evaluation selection, which is then sent to an external committee that supervises our work. That is the life cycle of a project. Once we have selected a project, we then work with them; we have several partners, associations and companies, who support the projects with security and accessibility audits, help them find sustainable business models, help with technical writing, and identify the projects’ needs. So we’re not just doing selection and “here’s your money, go code”; we really try to be present at all stages of development, to accompany the open source projects that we select as much as possible.

Walid Nouh: If you want to know more about NLnet funding, I refer you to the episode of the podcast Projets Libres! about NLnet, with Lwenn, in which we spend more than an hour on how it works, what types of projects are funded, etc. It’s quite fascinating. I’m a total fan of NLnet; I talk about it in almost all my episodes. So there you go, I’m not… Raphaël, can you please briefly talk a little about the method you created, a long time ago now, for doing evaluation?

Raphaël Semeteys: Yes, of course, thank you. 20 years ago, in fact. The method is called QSOS, which stands for Qualification and Selection of Open Source Software. It’s a method that I created about twenty years ago when I was at Atos and we were supporting our customers in their choice of free and open source software. And it’s something we were already doing internally. At the beginning, QSOS was very simple; we didn’t reinvent the wheel. It was making comparisons, a bit like what you found in FNAC catalogues, saying, “such and such a television, here are its technical characteristics, this is what it allows you to do”, and so we started to do that internally. Then, in my consulting work with clients, supporting their open source strategy, etc., we said to ourselves: “actually, this method could be open sourced itself.” And so, what does it consist of? As I said, it is at its core a risk analysis. We look at the project rather than the software itself, to try to identify the risks that could be linked to adopting the project. So, maturity and sustainability: these will be legal aspects, questions such as: is there governance? How is the community organized? Are the contributors all part of the same company? How many are there? Are things industrialized? Is there patch management? There is a whole battery of criteria that is standard, defined in the method itself, and applied systematically, regardless of the type of software. Then, there is another set of criteria that depends on the family of software considered, in order to be able to compare solutions with each other: BI solutions, dependency injection, back end, front end, etc. We define grids of criteria for each.

So it’s organized in the form of trees, and on the basis of these grids we can evaluate solutions. But what you have to understand is that when we decided to open source the method, what we said to ourselves was: what we would like to do is collaborative, community-based technology watch. Because from the moment we delivered a study, for example to the Ministry of Finance or to other clients, it began to become obsolete, since the software and the projects continue to evolve. And so we said to ourselves: “it’s a shame, there is an enormous loss of energy, while we are analyzing communities that come together to create value. So can’t we also create value by sharing the monitoring and distributing the monitoring effort?” And that explains a little bit how QSOS is organized. I may not go into details right now; maybe we’ll come back to it a little later.

But there is one point that is very important: we want to dissociate the activity of creating an evaluation grid from that of using an evaluation grid, whether I made it or someone else did, to evaluate a software package. And eventually a third person can use evaluations that were not made by them and compare the results in their own context. That’s what explains how the method is organized, and in particular the fact that we have a scoring system that is as simple as possible: each criterion is scored 0, 1 or 2. 0, it doesn’t do it; 2, it completely covers what is described in the criterion; and 1, it’s something partial, intermediate.

Raphaël Semeteys

And to try to do things as objectively as possible, so that evaluations can be used by others: when people take ownership of several evaluations and want to make a choice, coming back to BI frameworks, for example, they will of course look at the maturity and sustainability part, because it’s very important. But they will also be able to apply weightings, saying: in my context, I model my context in the form of weights that apply to these different trees and criteria grids. That’s how we articulate things and tell ourselves that we’re going to produce value that will be used by others. From there, we have a format and we can generate reports, that kind of thing. OK, I think I’ve gone on too long.
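(Editor’s note: to illustrate the scoring and weighting mechanism Raphaël describes, here is a minimal sketch in Python. The criterion names, scores, and weights below are purely illustrative and are not taken from a real QSOS grid.)

```python
# Illustrative sketch of QSOS-style scoring: each criterion is scored
# 0, 1 or 2, and each user contextualizes a shared grid with weights.

def weighted_score(scores: dict, weights: dict) -> float:
    """Combine per-criterion scores (0, 1 or 2) with context-specific weights.

    A weight of 0 removes a criterion from the comparison, which is how
    the same evaluation grid can be reused in a different context.
    """
    total_weight = sum(weights.get(c, 1) for c in scores)
    if total_weight == 0:
        return 0.0
    return sum(s * weights.get(c, 1) for c, s in scores.items()) / total_weight

# One shared set of evaluations (hypothetical criteria and scores)...
solution_a = {"license": 2, "governance": 1, "patch_management": 2, "packaging": 0}
solution_b = {"license": 2, "governance": 2, "patch_management": 1, "packaging": 2}

# ...weighted by one user’s context: packaging is irrelevant here (weight 0)
# and governance matters twice as much as the other criteria.
context = {"license": 1, "governance": 2, "patch_management": 1, "packaging": 0}

for name, scores in [("solution A", solution_a), ("solution B", solution_b)]:
    print(name, round(weighted_score(scores, context), 2))
```

With these made-up numbers, solution B comes out ahead (1.75 vs 1.5 out of 2); another user could reuse the same two evaluations with different weights and reach a different ranking, which is exactly the contextualization and reuse scenario described in the rest of the discussion.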

Walid Nouh: No, it’s fine. I think the logical next step is to pass the microphone to Thierry to explain how they use this method.

Thierry Aimé: Indeed, this method was identified almost at the time of its creation. We were perhaps one of the first to take an interest in it, at least outside of Atos. At that time, we were already regularly making free software choices, and I looked at the possibilities that existed. There was, for example, a solution pushed by Intel to evaluate free software in the early 2000s, but it was very complex, with a lot of criteria, sub-criteria, and very fine-grained ratings to apply. It didn’t seem very practicable to me. What I found interesting at the time, and which was confirmed since, is that the QSOS method was not bad in the sense that it didn’t split hairs too much. It was quite simple to implement, and we managed to come up with a fairly objective view, I find, when comparing two, three or four software packages in a given field.

There is another aspect that we liked very much. The contract is now inter-ministerial, so on another scale, but already at the beginning, in 2005, it was inter-directional, that is to say that all the departments of the Ministry of Finance benefited from it. In general, the commissioning departments do not have the same agendas at all. So when we did a study for one department’s need, the others did not have this need; but a year later, they finally encountered the same need. They could go back to these studies and use the weighting mechanism, which is an extremely interesting tool because it allows us to contextualize a QSOS study. And so the other departments, depending on their life cycle and their interest in this or that free software, could replay the method, bring the study out again, adapt it to their context, and obtain the best solution for their context. And that’s an even greater benefit today, when the contract is shared with all the ministries.

Thierry Aimé

These studies, which we do monthly, are technical or strategic. At the starting point there is a requester who expresses a need, defines criteria, etc. Within the framework of this contract, a QSOS study is carried out on the basis of the need initially presented, but it can be broadened a little, since when we scope our studies, we are open to all the ministries. So all the ministries can share and complete the functional grid, knowing that if I am only interested in certain aspects of this functional grid, I weight the others at zero. So it’s very simple to adapt the result of the QSOS study to my context. And once this study has been carried out, it can continue to live, since it can always be brought out again and adapted to a new context, where the right choice is perhaps a little different from the initial one.

Walid Nouh: Are you the one who does the studies in-house?

Thierry Aimé: These studies are carried out in the context of the contract. As I mentioned at the beginning, within the framework of the free software support contract, there is a monitoring and study service: a study is carried out every month on subjects chosen by the administration, scoped together, carried out by the service provider holding the contract, and presented to all the ministries. The restitution is now done by videoconference, where everyone can participate. For technical studies, or in any case whenever a QSOS grid has been used (we sometimes do studies where we unfortunately can’t apply QSOS grids; it’s not always the ideal tool), we have the famous diagrams that allow us to compare, to visualize the advantages and disadvantages. For us, one element that is absolutely essential is the durability of the solutions, their stability, their reliability, according to all the generic criteria that appear in all the studies, and this is an extremely important element in our choices.

Walid Nouh: Very good. Benjamin, what about you? What do you use as a method? Have you developed anything internally? How do you go about evaluating software in the context of your missions or the monitoring you do?

Benjamin Jean: So, I was thinking about that just now.

We are quite agnostic, in the sense that we will use just about everything that exists to meet the needs of the missions we are on. We work mostly on a tailor-made basis. As an example, I mentioned earlier evaluation from a valuation point of view. In fact, the important thing is to contextualize the value for the organization. If I am a research center, what do I look for when I want to open source something or to reuse open source? What are my criteria? The same if I am the European Commission, the same if I am a local authority.

Benjamin Jean

And from this initial work on value, we then derive elements to measure, and we identify what can be automated and what cannot, the latter being done through surveys or other methods of this type. So most of the time, we really do this work with the actor for whom we are working, to adapt, using all the tools that exist, an evaluation grid that is the most relevant to their challenges. Inevitably, building blocks are reused from project to project, but no two cases have ever been the same. Take the example of the research centre I mentioned, CNES: all the work we did there is published on the Internet, you will find it. For the European Commission, we were more focused on open source software that is critical to the Commission. Each time, the results are not the same. However, we rely heavily on all the methods that exist. Then, the other context I mentioned earlier is the calls for commons, or at least calls for projects that are close to these commons logics. Here, we generally intervene more on what makes the project a commons. We are less focused on business expertise, which is often left to the commissioning actor or the actor who finances. The prism we generally try to take is the one often used for the commons: the triptych of resources on the one hand, communities on the other, and then the rules of governance of the resource. The analysis along these three axes lets us understand what the resources are and their reliability; what the communities are, and the sustainability of the project behind the community; and likewise how the actors organize themselves among themselves and with others, to see to what extent it is something that is sustainable in the long term or not.
This is more limited, but it is also close to what NLnet mentioned earlier: really making sure that there is an interest in financing this project in particular.

Walid Nouh: Indeed, all this is interesting because we see that depending on what we look at, and at what point in the product’s life cycle, we don’t evaluate in the same way. One of the questions I wanted to ask Raphaël: this QSOS method was developed 20 years ago. Is it still in use? Where does it stand now?

Raphaël Semeteys: We have to differentiate the method from the project, and in particular from the initial ambition of creating community-based, collaborative technology watch. The method still exists and is still used today. For the project, to facilitate the work, the development of the method and especially the sharing of information, we had started to develop tools. And about ten years ago, I kept saying to myself: “If I applied QSOS to the QSOS project itself, I think that in terms of maturity and sustainability, it wouldn’t score great”. Because it’s really tied to me, to Atos. People have left, others are leaving QSOS, etc., and sustainability is not guaranteed. When I left Atos, that was somewhat confirmed; the project became a bit of an orphan. For all that, the method itself is still valid. But the tools are completely outdated, obsolete given today’s technologies. So we’re working on a reboot of QSOS, and this time we really want to do it in the most community-driven way possible. What we’re doing now is preparing the technical foundations to have something to bootstrap the project, and then trying to federate a community around it. So if you’re interested in this topic, follow us, contact me; it will be available on GitHub soon. Here I’m talking about the coding and development of the tools, but the objective is really to collaborate and contribute to evaluations, that’s clear. And very quickly, we hope, questions will come up. Until now, we never really had to put governance rules in place, because we were not confronted with the problem, and we didn’t want to build an overly complex machine before being confronted with it.
When several people want to correct the evaluations, or modify the grids, how do we update the evaluations against more recent versions of the grids, that kind of thing? All that is going to be very, very interesting. So come and join us; very soon we’ll try to relaunch this. There you go, I don’t know if that answers the question.

Trends that will impact evaluation

Walid Nouh: That answers the question. If we now talk a little bit about the future, as a conclusion: what are the trends that create a need to evaluate, or that will change the way you evaluate things? In what is happening at the moment, or in what you suspect is going to happen, what will call into question the way things are evaluated? Lwenn, do you want to tell us a little bit about it? It’s a very broad subject; we could talk about it for a very long time.

Lwenn Bussière: Yes, it’s a pretty difficult subject too. So, phew… This is a difficult subject.

Walid Nouh: Take a point or two that come to mind, that’ll be fine.

Lwenn Bussière: For us at NLnet, we have had a lot of challenges in terms of scale, since around a thousand projects apply per year and we are a team of four doing the evaluation. So we've really had concerns about maintaining, at that scale, the quality and the attention to detail we give every project we receive. That is not something we want to sacrifice, but we're still trying to find a compromise that works at the new scale we're operating at.

More broadly, there was a lot of talk before this round table about the issues around the CRA, which are beginning to change the dynamics of evaluation.


I think Benjamin is very concerned, so I’ll probably pass you the hot potato.

Walid Nouh: The Cyber Resilience Act.

Lwenn Bussière: On our side, this is something that is close to our hearts, but it already was in the sense that, when we evaluate projects, we ask questions from the start about architecture, good practices, dependencies, and the choice of languages, which can impact many qualities in terms of security and accessibility. So we try from the beginning to discuss these things, but I think we are all impacted by the CRA and the way it will change our evaluation methods and the aspects we look for in the different projects.

Walid Nouh: Benjamin, on your side?

Benjamin Jean: So the teaser for the CRA: tomorrow morning there is a presentation of a guide we produced for the CNLL on the Cyber Resilience Act, the European cybersecurity regulation that applies to products placed on the European market. So I invite you to come to the conference tomorrow morning; the guide will be published at that time as well. We really tried to go as far as possible, especially on the effects it can have on open source players, with this particular open source prism. And that was the rather easy answer. Otherwise, for your initial question, what I also see is…

In the case of the projects that we have had to audit, often what we have realized is that there was a dependency via APIs on many services that were provided by other providers. And I think that in the audit of projects, in the evaluation of projects, even open source projects, it’s important to take into account this dependence of a project on other projects. Because in fact, we are not just in a static vision of the scope of the code. That’s a point that is interesting to keep in mind.


And a challenge, perhaps, for the years to come is quality. Open source can be either one big block or a very, very fine grain. We work a lot at a very fine grain, and at that level evaluation can only be automated. To automate the evaluation of these very fine grains, of all the libraries that you use in your organization, you need metadata that is accurate and of very high quality. And there, there is still a lot of work to be done on the quality of the metadata and on the way it can be processed. We have methods to process it, but we don't have all the metadata we'd like to have.
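To make the idea concrete, here is a minimal sketch, not anything the speakers describe using, of the kind of automated metadata check Benjamin is alluding to: scanning an SPDX-style software bill of materials for packages whose license metadata is missing or unasserted. The sample document and package names are invented for illustration; real SPDX files carry many more fields.

```python
import json

# Hypothetical minimal SBOM; the field names ("packages", "name",
# "versionInfo", "licenseDeclared", "NOASSERTION") follow the SPDX
# JSON format, but real documents are far richer.
sbom = json.loads("""
{
  "packages": [
    {"name": "libfoo", "versionInfo": "1.2.0", "licenseDeclared": "MIT"},
    {"name": "libbar", "versionInfo": "0.9.1", "licenseDeclared": "NOASSERTION"},
    {"name": "libbaz", "versionInfo": "2.0.0"}
  ]
}
""")

def packages_missing_license(doc):
    """Return names of packages whose declared license is absent or NOASSERTION."""
    missing = []
    for pkg in doc.get("packages", []):
        # In SPDX, a missing assertion is conventionally spelled NOASSERTION;
        # treat an absent field the same way.
        if pkg.get("licenseDeclared", "NOASSERTION") == "NOASSERTION":
            missing.append(pkg["name"])
    return missing

print(packages_missing_license(sbom))  # → ['libbar', 'libbaz']
```

A real pipeline would run such checks across every dependency in the organization, which is exactly why the quality of the upstream metadata matters so much.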

Walid Nouh: Raphaël, your side? What do you see as developments that will impact your method, your way of evaluating?

Raphaël Semeteys: Just to react to what you were saying about metadata: it's clear. Twenty years ago, when I created QSOS, there was no SPDX, for example. Typically, that's something we'll build on; that's an important point. Then, on what will change in the way of evaluating, look at what has changed in IT. For me, there are two big things: the cloud and AI. Why the cloud? Because of all the reactions from community projects that said to themselves, "Oh yes, but we have to change our licenses to react to cloud providers who sell our project as a service and who don't contribute." So we see that some people have moved beyond the definition of open source because of this. I'm thinking of Mongo, Elastic, Terraform; there are plenty of them, and tomorrow there will be others. So new licenses are emerging. We can also associate this with other types of licenses where notions of a Code of Conduct or ethics have been included in the projects; ethical licenses are emerging. That's the first point: it's about license terms, about what Open Source means. But still, we still have to evaluate things. And AI, why? Here, I'll be quick. AI, because it can be used in the evaluation itself: starting from things that are formal and objectively evaluated, it can produce something in natural language. So I see AI being used to go further in the evaluation.

And then there is the real question of evaluating AI, of evaluating what Open Source AI means. Here, I refer to this morning's discussion and the other round table, which was more animated than ours. We're calm; the sparks were over there. And I also refer to the presentation I will give tomorrow on this subject: what Open Source means, at least for an industrial player like me, and what it means to be open when we talk about AI. So that's going to change things, and we can see that we're going beyond the notion of code. You were talking about APIs; we're also going to talk about datasets and algorithms. Anyway, these kinds of questions arise. But I'll stop there.


Walid Nouh: Thierry, to finish.

Thierry Aimé: Yes, well, we’re really end users. So I don’t think things have evolved too much, except perhaps for the issue of AI, which, indeed, is not really taken into account today in the QSOS method.

As it stands, the method provides the service we expect from it, so from that point of view it must continue to exist. I'm happy to hear that a reboot is being prepared. In any case, we will be very happy to contribute with our studies, since the monitoring studies we have carried out over the past 4 years are systematically made available as open source on the Adulact forge. The URL is gitlab.adulact.net/marche-sll (SLL: Free Software Support). On that page you will find all the studies published over the past 4 years.


And as part of our publication effort, we will not fail to publish our QSOS monitoring studies in the new sharing space, to take part in the collective effort; that's no problem, since they are already open source anyway. Not all of our studies contain QSOS evaluations, but whenever they do, we will share them. There you go.

Walid Nouh: Perfect, it's 5:45 p.m.; we managed to keep going for 45 minutes. A few words in conclusion: this is a very broad subject, and here we only scratched the surface. I refer you to the podcast Projets Libres if you want to know more about it. We're going to start a dedicated series called How to evaluate free software?, in which we will meet actors who have these evaluation needs and ask them what they evaluate, why, how, what their criteria are, and so on. It starts in January with Raphaël: we're going to talk to people and do episodes about it. Thank you all. The conference was filmed, and it will also be available on the podcast with a transcript; we hope everything will work out for the people who couldn't be here. Thank you all. Good night.

Raphaël Semeteys: Thank you.

This conference was recorded during the Open Source Experience exhibition in Paris on December 5, 2024.

License

This lecture is licensed under CC BY-SA 4.0 or later.
