News 18 October 2022

"There is a need to change the mindset about trust and the democratization of knowledge" one of the conclusions of the 5th EERAdata workshop

For the first time in a hybrid format, EERAdata organised its fifth workshop in Brussels and online to discuss business models, licensing and certification for FAIR and open low-carbon energy data. 


Day I – 3rd of October, 2022

On the first day, the EERA transversal Joint Programme Digitalisation for Energy (DfE) joined the EERAdata workshop and brought its perspective on the value of energy data and data services through the lens of various speakers leading different initiatives linked to the Joint Programme. There were two sessions on the first day: a plenary session and a session dedicated to connecting the EERAdata use case "material solutions for low-carbon energy" with HPC and the data services needed.

Session 1: Plenary Session

The speakers and their presentation topics of the plenary session were:

  • Exascale, a great opportunity for Clean Energy Transition in Europe - Rafael Mayo-García (CIEMAT)
  • HPC services and SaaS, a way for exploiting data - Edouard Audit (EoCoE)
  • Energy Transition Models, a Center of Excellence - Pieter Vingerhoets (VITO)
  • Sensor data collection and secure data transfer in Hydropower - Johanna Schmid (VRVIS)

In the presentations during the plenary session, several cross-cutting activities that should be reinforced in the energy sector were identified. Digitalization was frequently emphasized, as it changes the pathway toward research and innovation. One of the most tangible conclusions was that High-Performance Computing (HPC) is the key enabler in improving coordination at the European Union (EU) level. Several examples of HPC as a key enabler were provided in the fields of wind (e.g. HPC for wind energy with Alya), hydro (e.g. pan-European hydrological modelling using the ParFlow code with GPUs), and materials (e.g. HPC materials design for solid-state barriers). HPC itself is undergoing a significant change, as the next generation of Exascale systems poses numerous challenges, such as approaching a 100-fold reduction in energy consumption, computing models capable of using several tens of millions of computing elements, and heterogeneous computing nodes with deep memory hierarchies. Hence, the exascale transition will require radical innovation in computing technology, both hardware and software.

As a critical perspective, it was argued that data were becoming a major concern in HPC in many ways:

  • The actual I/O – memory bandwidths do not scale as fast as the computational capabilities
  • All recent supercomputers have a converged architecture capable of dealing with HPC, AI and data-intensive workload
  • New data-based scientific practices – generalization of AI
  • Massive development of sensors, treatment of experimental data, sometimes in conjunction with numerical simulation

The speakers also listed specific scientific challenges concerning different energy technologies in the data field. These include:

  • Wind for Energy
    • Optimization of turbine placements to maximize power output and reduce wind turbine maintenance
    • Help the European wind industry to be more competitive by reducing the cost of wind energy.
  • Meteorology for Energy
    • Enhanced resilience of the power market to variability and extreme events
    • Application to actual power management systems for selected site locations
  • Materials for Energy
    • Increase in performance and extension of the lifetime of organic and silicon solar cells
    • Ascertain the best electrode/electrolyte combination which optimizes the electricity production in electrochemical systems
  • Water for Energy
    • High-resolution and reliable long-term predictions for the efficient management of hydropower plants and the optimal configuration of geothermal plants.
  • Fusion for Energy
    • Contribute to the success of ITER and shorten the time-to-market of a commercial fusion power plant, possibly at a lower cost.

Besides scientific challenges, several technical challenges were also discussed:

  • Programming models: HPC performance, scalability, code architecture and parallelism issues to prepare selected applications for the Exascale ecosystem
  • Scalable solvers: Design and implement exascale-enabled linear algebra solvers for SC applications and integrate them into the SC application codes.
  • Data flow: Provide I/O and data-flow support to SC taking care of resiliency, performance, data size and accessibility.
  • Ensemble Runs: Develop an exascale-ready framework for ensemble runs and in-situ data analysis to empower the Weather and Hydrology applications enabling them to take benefit of next-generation exascale machines.

Based on the discussion, it was argued that each technical challenge tackles an important bottleneck for exascale applications. All technical challenges are closely linked to the scientific challenges; however, the tools and methodology developed have an impact well beyond the flagship codes.

For example, EoCoE introduced a SaaS portal providing a single access point to validated applications for renewable energy. In addition, the SaaS portal showcases EoCoE applications to attract new users and foster collaborations, especially with industry.

In the following presentations, another significant perspective was considered: what the modelling research community can do to facilitate an accelerated decrease in fossil fuel dependency in the context of repowering the EU. A significant output of the discussion was that there is a lack of European scenarios to put national results into context and a lack of detail on national scenarios in European models. Moreover, although numerous modelling data exist, little of it is openly available. Inconsistent European scenario analyses and uncertainties in investment further exacerbate the situation. It was underlined that research institutes do not have a clear idea of how to manage this situation, while universities and academics have also shown limited interest in these issues.

Last but not least, sensor data collection and secure data transfer in hydropower were discussed in the context of the DIGI-Hydro research project. The participants debated sensor measurements, the combination of model-based simulations with data-based approaches, and predictive maintenance, as well as challenges in data collection, including data being transferred in chunks and the lack of live monitoring. Significant challenges within the DIGI-Hydro project were the lack of a visualization tool to view the data, the necessity of splitting data into smaller chunks, and resampling for data display.

The conclusions of the plenary presentations largely imply that Europe is facing a major energy crisis and challenges, that a new decentralized energy sector must be designed, and that the scheme to come will only be efficient if digitalization features as a major actor. Furthermore, HPC and FAIR data are the cornerstones for achieving such an objective. Within this framework, the necessary steps are that coordinated investments at the European level be made available and that expertise be consolidated in the energy scientific community at the EU level. In this sense, trans-disciplinary collaborations will be the right source of expertise for improved scientific advancement and for making the European energy objectives a reality.

Another significant conclusion was that HPC could be a crucial asset for many energy applications, and energy could be one of the incentives to develop large HPC systems. In this sense, large and growing synergies between HPC and data were vital.

Moreover, numerous applications of HPC and data analytics in the energy sector were exemplified in materials, grid management, and energy policy.

Session 2: HPC & data services needed

The second session, on the HPC and data services needed, connected to the EERAdata use case "material solutions for low-carbon energy". Massimo Celino from the Italian National Agency for New Technologies, Energy and Sustainable Economic Development (ENEA) acted as the moderator of the second session.

The speakers and their presentation topics of the session on HPC and data services needed were:

  • IEMAP - Italian Energy Materials Acceleration Platform - Massimo Celino (ENEA)
  • Material Science Acceleration Platforms - Holger Ihssen (Helmholtz)
  • Materials for energy transition, HPC and digitalization use to accelerate low-carbon energy technology solutions - David Lacroix (Univ Lorraine & AMPEA JP)
  • Digitalization in support of nuclear materials research - Marjorie Bertolus (CEA & NM JP, online)
  • The value of provenance, reproducibility and FAIR sharing in materials science: AiiDA and Materials Cloud - Giovanni Pizzi (EPFL, MARVEL & PSI, online)
  • Fair-by-design approach toward materials at Area Science Park - Stefano Cozzini (Area Science Park, online)

In the presentations given during the second session, the materials-for-energy platform IEMAP was first introduced. The Italian Energy Materials Acceleration Platform (IEMAP) Operational Plan aims to establish a nationally distributed platform for the accelerated design of advanced materials for energy. The IEMAP platform comprises a network of ENEA, CNR, IIT and RSE laboratories that share data and instrumentation for designing materials for electrochemical storage, electrolyzers and photovoltaics. For materials science activities, the target is to research new materials with specific properties in the field of batteries. The primary methodology is based on building an autonomous system that combines computational procedures with artificial intelligence to speed up scientific discovery. Participants were then introduced to AiiDA, an open-source Python infrastructure that helps researchers automate, manage, persist, share and reproduce the complex workflows associated with modern computational science, together with all associated data. AiiDA is built to support and streamline the four core pillars of the ADES model, namely Automation, Data, Environment, and Sharing. Other significant topics included machine learning and crystal graph convolutional neural networks (CGCNN).
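The core workflow idea that AiiDA implements, where every computation step is automatically recorded together with its inputs and outputs so that results stay reproducible and shareable, can be sketched in a few lines of plain Python. The sketch below is purely illustrative and does not use the real AiiDA API; all function names and values are hypothetical:

```python
import hashlib
import json

# Minimal illustration of provenance-tracked workflows in the spirit of
# AiiDA's ADES pillars (Automation, Data, Environment, Sharing).
# This is NOT the AiiDA API, just a sketch of the underlying idea.

PROVENANCE = []  # stand-in for AiiDA's provenance graph/database

def tracked(func):
    """Record inputs, outputs and a content hash for every call."""
    def wrapper(**inputs):
        result = func(**inputs)
        PROVENANCE.append({
            "step": func.__name__,
            "inputs": inputs,
            "outputs": result,
            # the hash lets anyone verify that stored data match the inputs
            "hash": hashlib.sha256(
                json.dumps([func.__name__, inputs], sort_keys=True).encode()
            ).hexdigest()[:12],
        })
        return result
    return wrapper

@tracked
def relax_structure(lattice_constant):
    # placeholder for an expensive simulation step
    return {"relaxed_a": lattice_constant * 0.99}

@tracked
def compute_band_gap(relaxed_a):
    # placeholder for a follow-up calculation consuming the previous output
    return {"band_gap_eV": 1.1 + relaxed_a * 0.01}

structure = relax_structure(lattice_constant=5.43)
gap = compute_band_gap(relaxed_a=structure["relaxed_a"])

# the provenance log links every output back to the step that produced it
for rec in PROVENANCE:
    print(rec["step"], "->", rec["outputs"])
```

In AiiDA itself, the equivalent bookkeeping is handled by a full provenance graph stored in a database, which is what makes sharing and reproducing entire workflows practical at scale.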

Another prominent topic of this session was the Innovative Materials Platform which integrates and systematizes equipment and expertise pertaining to the Large Laboratories at the Area Science Park, dedicated to studying and developing advanced surfaces and materials. In this sense, a set of principles and practices that foster openness throughout the entire research life cycle were mentioned, including:

  • Provocation: explore or mine open research resources and use open tools to network with colleagues.
  • Ideation: develop and revise research plans and prepare to share research results and tools under FAIR principles.
  • Knowledge generation: collect data, conduct research using tools compatible with open sharing, and use automated workflow tools to ensure accessibility of research outputs.
  • Validation: prepare data and tools for reproducibility and reuse and participate in replication studies.
  • Dissemination: use appropriate licenses for sharing research outputs and report all results and supporting information (data, code, articles, etc.).
  • Preservation: deposit research outputs in FAIR archives and ensure long-term access to research results.

For the sake of FAIRification, a set of actions were identified such as:

  • Training of a young, dedicated team able to develop advanced data services to be implemented in the labs to collect "fair-by-design" data.
  • Design and implement a FAIR-by-design acquisition pipeline to collect scientific data, automatically annotated, in selected labs.

Regarding materials for the energy transition and the use of HPC and digitalization to accelerate low-carbon energy technology solutions, an example was provided through AMPEA, which coordinates and promotes multidisciplinary joint research in fundamental materials science and processes for energy. Accordingly, it was concluded that the fast improvement of HPC allows material properties to be defined accurately, and that the combination of HPC, databases and AI allows materials properties to be tailored and optimized. Furthermore, database construction with a common standard is mandatory to foster innovation. The focus should also be more on modelling the measured data themselves rather than on modelling data that can merely explain experiments. However, specific questions were also raised regarding how data quality is controlled in the databases and which kinds of data and metadata are available to ensure the reliability of simulation and process control.

In the second session, another interesting topic was data on nuclear materials. It was argued that obtaining data on nuclear materials in operation is difficult and expensive. There are various stages, including the fabrication of samples, irradiation in power or materials-testing reactors in off-normal conditions, and detailed characterization of fresh and irradiated materials. Too few data are available for nuclear materials, and this has two main consequences: there is a need to complement the data using modelling and simulation, and a need to capitalize on all the data obtained from different sources. Hence, digitalization is key to meeting these needs. As examples, results of recent Horizon 2020 projects were shared, namely multiscale modelling for fusion and fission materials, localization of deformation in ferritic/martensitic steels, GEMMA (Generation IV Materials Maturity), and INSPYRE (Investigations Supporting MOX Fuel Licensing in ESNII Prototype Reactors).

Day II – 4th of October, 2022

On the second day of the workshop, the attention was shifted to understanding how we move forward from the current state of affairs after realizing the key importance of FAIR and open energy data and how to derive value from it. The day was kicked off by two keynotes, one on licensing and another on certification. The workshop continued with a panel on sustainable business models for FAIR and open energy data and how to mine the value of data in this sector. Finally, the last session of the day dived into the role of FAIR and open data in the energy policy domain.

Opening Remarks

Prof. Dr. Mehmet Efe Biresselioğlu from IUE SENLAB and Prof. Dr. Valeria Jana Schwanitz from HVL welcomed the participants and gave the opening remarks. They presented the data challenges in the energy sector and the value of open energy data. Accordingly, the data challenges are:

  • Data silos hinder interoperability within and among businesses
  • Data and metadata are not stored together, negatively impacting data findability and reusability
  • Data governance lock-ins due to the use of proprietary software and data formats
  • Lack of ubiquitous access to sensor-based real-time data
  • Lack of standardization and standards hidden behind paywalls
  • Lack of energy data markets
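The second challenge listed above, data and metadata not being stored together, is often addressed by writing a machine-readable metadata "sidecar" file next to each dataset. The following Python sketch illustrates the pattern; the file names, fields and example values are hypothetical and not drawn from any specific EERAdata tool:

```python
import csv
import json
from pathlib import Path

# Sketch of keeping data and metadata together: every dataset is written
# alongside a machine-readable JSON "sidecar" that describes it.
# File names, fields and values here are purely illustrative.

def write_dataset(path, rows, metadata):
    path = Path(path)
    with path.open("w", newline="") as f:
        csv.writer(f).writerows(rows)
    # the sidecar carries licence, units and provenance, so the CSV stays
    # findable and reusable even when copied elsewhere
    sidecar = path.with_suffix(".meta.json")
    sidecar.write_text(json.dumps(metadata, indent=2))
    return sidecar

meta = {
    "title": "Hourly wind power output (example)",
    "units": {"timestamp": "ISO 8601", "power": "MW"},
    "licence": "CC-BY-4.0",
    "source": "illustrative data, not a real measurement",
}
sidecar = write_dataset(
    "wind_output.csv",
    [["timestamp", "power"], ["2022-10-03T00:00", 41.2]],
    meta,
)
print(sidecar)
```

Because the sidecar travels with the CSV, the licence, units and provenance remain findable and reusable wherever the file is copied, which is exactly what the F and R in FAIR ask for.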

On the other hand, the value of open energy data was discussed as follows:

  • Added value from sharing the data
  • Saving cost from sharing the data
  • The sales price of the shared data in the market
  • Prices from comparable data
  • Costs to reproduce or replace the shared data

Keynote Speeches

Keynote Speech: Robbie Morrison: "Data licensing and open science with a particular focus on energy policy making"

First, regarding data licensing and open science, the presentation given by Robbie Morrison covered non-personal data that is of public interest and can be, or has been, legitimately published. The issues raised during the presentation included material under statutory reporting (noting that most mandates seek to assist system security or remedy market failure rather than advance sustainability) and collaborative projects leveraging citizen science, such as OpenStreetMap and Wikidata.

During the session, the meaning of “openness” and “open data” was discussed, and the neglected data standards were mentioned. Accordingly, a set of research trends in Europe were introduced. Energy systems researchers in Europe:

  • often work in increasingly legally risk-averse environments
  • are adopting open science doctrines, including strict reproducibility
  • increasingly develop and use custom software
  • are recognizing the benefits of collaborative development for both software and data
  • are widely reliant on what the European Commission describes as “privately-held information of public interest”
  • those developing open-source frameworks are desperate for genuinely open and semantically consistent data
  • those working with open models are progressively venturing into the previously closed world of public policy analysis
  • some modelling teams are now working with researchers from the global south

Afterwards, a number of representative community projects in the energy sector centred on data management and increasingly looking toward linked open data were introduced within the context of Europe and the United States. These included Open Energy Platform (OEP), Open Power System Data (OPSD), and PowerGenome.

Based on the discussion, the ultimate goal of the presentation on data licensing and open science was to create a knowledge commons comprising entirely usable and reusable data, community curation of canonical data, consensus semantics, and underpinning technical standards that are free, necessarily reliant on distributed architectures and linked open data (LOD) methods. In addition, the Open Data Directive 2019/1024, the Database Directive 96/9/EC, and data reuse and wider policy settings were also discussed as the legal basis. In this sense, it was argued that there was no reason why open analysis could not contribute to public policy formation. This has begun, but there is little to no official recognition. Indeed, the European Commission identifies commons-based peer production as a megatrend; however, it fails to embrace the concept in public policy formation.

Keynote Speech: Dr. Ingrid Dillo, Deputy Director - DANS: "FAIR and TRUST: The perfect mix"

Dr. Ingrid Dillo introduced the background of DANS, the Dutch national centre of expertise and repository for research data, and its role in keeping data FAIR. The presentation yielded a number of key insights. First, the frequent confusion between open and FAIR data was discussed, since the two do not always coincide. Second, it was noted that open and FAIR data are well known among policymakers, funders, and data service providers, while less known among researchers. Third, the motivators to FAIRify data include clear policies and support for compliance, while the barriers are mainly the time and effort required for research data management (RDM) and data sharing (i.e. academic recognition), data protection, and legal restrictions. Furthermore, challenges regarding FAIR metrics and assessment were questioned with respect to different assessment tools, choices, implementations, and scores. Finally, for FAIRification, it was concluded that skilled people, transparent processes, interoperable technologies and collaboration are needed to build, operate and maintain research data infrastructures.

In addition to FAIR principles, TRUST principles were introduced, including transparency, responsibility, user focus, sustainability, and technology:

  • Transparency: To be transparent about specific repository services and data holdings that are verifiable by publicly accessible evidence.
  • Responsibility: To be responsible for ensuring the authenticity and integrity of data holdings and for the reliability and persistence of its service.
  • User Focus: To ensure that the data management norms and expectations of target user communities are met.
  • Sustainability: To sustain services and preserve data holdings for the long term.
  • Technology: To provide infrastructure and capabilities to support secure, persistent, and reliable services.

In general, the takeaway message of the presentation on FAIR and TRUST principles was that we need to share our data to turn open science into a reality. The FAIR principles help define high-quality and transparent research data management practices. The TRUST principles and CoreTrustSeal certification help us to trust the research data infrastructures we need to safeguard the accessibility of our (FAIR) data for the future.

Panel Session 1: Sustainable business models for FAIR and open energy data - How to create and mine the value in data?

Following the keynote speeches, two panel discussions were conducted on the second day of the workshop. The first panel discussion was about sustainable business models for FAIR and open energy data. Accordingly, the main research question was how to create and mine the value in data.

The first panel session was moderated by Prof. Dr. Mehmet Efe Biresselioğlu, Principal Investigator at IUE SENLAB. The speakers of the panel session on sustainable business models for FAIR and open energy data were:

  • Dr. Sırrı Uyanık, CEO - ISKEN Sugözü Power Plant - Online participation
  • Dr. Jens Olgard Dalseth Røyrvik, Senior Researcher - NTNU Social Research - Online Participation
  • Hasan Özkoç, Director - Mediterranean Energy Regulators (MEDREG) - Online participation
  • Prof. Dr. Uğur Soytaş, Head of the Climate Economics and Risk Management Section - DTU - Online participation
  • Prof. Dr. Muhittin Hakan Demir, Senior Researcher - IUE SENLAB

The panel kicked off with a discussion pertaining to the value of data mining. The panelists shared their experiences from other research projects and field experiments. One of the most interesting aspects noted during the panel was that several huge databases were utilized in previous Horizon projects; however, these databases were not FAIR despite being open. This was regarded as a barrier to the contextualization and practicability of the data. All panelists agreed that data is highly significant in every sector, whether academia, the private sector, or the policy domain, and that the utilization of data should be fair even across different legal systems and countries. In this sense, the eligibility and accessibility of the data were emphasized by the panelists.

From the industry perspective, the experts pointed to two current crises, the climate crisis and the energy security crisis, which make it necessary to process data more carefully. From the researchers' perspective, knowledge, social acceptance and trust were related to the importance of data. Regarding the competitive advantages of FAIR and open data, the experts emphasized its practicality, including its contribution to timing, proper infrastructure and coding. On the other hand, they noted challenges with FAIR data, such as the security concerns of consumers, policymakers and other stakeholders. In this sense, the experts stated that unregulated data and its misuse could lead to energy insecurity and manipulation. In addition, creating a common taxonomy and language, as well as working across disciplinary domains, were mentioned as other challenges that FAIR and open data encounter. It was also discussed that institutions and industries generally do not know how to monitor and regulate data or how to increase data efficiency.

The panel continued with discussions on the trustworthiness of the data currently in circulation that are used for academic purposes. It was remarked that double-checking data and data sources is a significant element in increasing data reliability. Regarding the practical importance of FAIR data, the experts pointed out that data form the baseline of the plans of individuals, industry and institutions, determining their preferences and strategies. Hence, the experts stressed collaboration between the public and private sectors via consultations, meetings and workshops. In this way, they noted, data could become more exchangeable and lead to good practices and governance for different stakeholders.

Panel Session 2: Utilizing FAIR and open data in the energy policy domain

The second panel was conducted to shed light on how to utilize FAIR and open data in the energy policy domain. The panel was moderated by Alexandra Zgorska and Mariusz Kruczek from GIG Research Institute. The speakers of the panel session on the relevance of FAIR data management, usage and sharing in policy-making processes were:

  • Stanisław Tokarski - Member of the Governing Board of the Polish Electricity Committee and the Board of Directors of VGB, member of KIC InnoEnergy, president of the Polish Power Plants Association and member of the Board of the Economic Chamber of Energy and Environmental Protection
  • Michał Jabłoński - Polish Power Plants Association, Deputy Director of Environmental Affairs

The second panel was launched with a brief introduction to the role of energy in the EU's priorities. These priorities were listed as decarbonizing the economy and reducing CO2 emissions, diversifying Europe's energy sources (including reducing dependence on energy imports), and the integration and free movement of energy within the EU. Based on these priorities, the energy efficiency policy cycle was also touched upon with reference to utilities, end users, businesses, IT companies, data providers, and ESCOs. Finally, the moderators explained the reasons why we need data in energy policy, as follows:

  • To respond to the need for energy statistics
  • To support each country’s Clean Energy Transitions
  • To ensure well-informed political decision-making by detailed data on energy end-use
  • To respond to the need for detailed energy data and related activity data
  • To learn from each other, share tools and new ideas, developing new approaches
  • To design, monitor and evaluate policies

During the moderators' presentation, a survey conducted in nine countries (Poland, Germany, Italy, Spain, Belgium, France, the Czech Republic, Slovenia, and Greece) was introduced to assess the respondents' knowledge of FAIR data and their involvement in the data FAIRification process. The conclusions of the survey demonstrate that:

  • All of the individual attributes defining FAIR data were indicated as very important or essential factors.
  • Financial issues are not a problem in managing, processing and sharing data; instead, the lack of rules regulating the flow of data and ensuring its protection was indicated as one of the barriers.
  • The incentive system used by policymakers to encourage the adoption and use of open and standardized data when drafting new data policies is insufficient.
  • The use of standards for shared data is not sufficiently promoted and required.
  • Machine readability is not sufficiently promoted and required by policymakers.
  • The most important benefit of using FAIR data was access to reliable information sources.
  • The primary barrier is a lack of knowledge; consequently, if something is unknown in the community, it cannot be implemented correctly.
  • “Lack of understanding of the FAIR data concept (1), difficult data analytics due to different data formats (2) and different understanding of data (3)” were identified as important barriers/impediments to applying good data management practices of FAIR.
  • The results prove the important role played by international projects, meetings and workshops in transferring knowledge of FAIR data and promoting attitudes and actions conducive to the FAIRification process.
  • Reliable and accessible data are required and used not only at the policy-making stage but throughout the whole monitoring process, when evaluating the progress of implementation. More reliable and accessible data allow constant monitoring of processes, activities and actions, catching deviations, and taking quick remedial actions.

During the panel session, a significant topic was the data acquisition from European power plants and experiences from the RECPP project. The general objective of the RECPP project was to examine the challenges and opportunities related to the re-purposing potential of the coal power plants and their infrastructure. In this sense, several data acquisition challenges were presented, including:

  • Preparation of a questionnaire sheet that all recipients would be able to fill in (standardized) while simultaneously obtaining data specific to a given location/power plant.
  • Confidential/sensitive data, requiring the consent of the management board, the need to disclose data indicating the operator
  • Errors in filling in the forms despite the aforementioned standardization, meaning the data had to be carefully checked
  • Data reliability
  • Partially completed forms
  • Language barrier - technical vocabulary, differences in legal regulations

Based on the discussions, the concluding insights were that data-collecting processes are essential for building an accurate database, and that understanding the definition of the required data is crucial for filling in the questionnaire. Collected data should also be checked for systematic errors before their usage is automated. Partial data should be accepted, as it is not possible to have access to all data, and experts' opinions help to eliminate structural errors.

The panel on utilizing FAIR and open data in the energy policy domain continued with Michał Jabłoński's presentation on data collection in the energy sector with a view to Large Combustion Plants (LCPs). Accordingly, the participants discussed the reliability and trustworthiness of the data. It was concluded that data at every stage of the decision-making process need to be cross-checked and verified; even raw data may be misleading without full context and background.

Day 3 – 5th of October, 2022

On the third day of the workshop, the focus shifted to understanding the challenges and opportunities regarding FAIR and open data encountered in implementing H2020 projects, in order to derive lessons that could be applied directly to upcoming European projects. The day was kicked off by a round table discussion in which perspectives from H2020 projects on FAIR and open data were discussed under the moderation of Prof. Dr. Mehmet Efe Biresselioğlu from IUE SENLAB. The day closed with an internal workshop, moderated by Prof. Dr. Valeria Jana Schwanitz from HVL and reserved for the EERAdata consortium members, to discuss the main takeaways and draw up the final steps towards the project's conclusion. Finally, the workshop ended with the closing remarks of Prof. Dr. Mehmet Efe Biresselioğlu and Prof. Dr. Valeria Jana Schwanitz.

Round Table Discussion: Perspectives from H2020 projects on FAIR and open data

The round table discussion was moderated by Prof. Dr. Mehmet Efe Biresselioğlu from IUE SENLAB and consisted of the following speakers:

  • Prof. Dr. Christian Klöckner, Project Coordinator of ENCHANT, SMARTEES and ECHOES - NTNU
  • Dr. Andrea Kollmann, Project Coordinator of DIALOGUES - JKU Energieinstitut (Online participation)
  • Prof. Dr. August Hubert Wierling, COMETS and EERAdata Projects - HVL
  • Dr. Alexandra Lex-Balducci, StoRIES Project – KIT

The round table started with a discussion on the value of data and the speakers' previous experiences with data in their other projects. The speakers pointed out the importance of data in their previous projects, which were based on qualitative interviews and large surveys. It was noted that the challenge was how to make the data open and reusable while anonymizing it. The usable datasets were in formats suited to statistical as well as qualitative and quantitative analysis. It was discussed that the anonymization of qualitative data was generally the most problematic stage of the projects.

Furthermore, it was noted that numerous projects included audio files of interviews and their translations into English; hence, it was often difficult to manage and store such big data. The discussion in the round table continued with the necessity of FAIR principles and data management for the projects. Accordingly, it was noted that adopting FAIR principles already during the proposal-writing phase is significant. In this sense, it was agreed that the knowledge, will and skills of proposal writers, as well as the budget, are essential for data management and FAIR principles. As another significant issue, it was argued that researchers have an important role in assisting municipalities and other actors in dealing with the enormous amount of data. In addition, the speakers focused on trust issues as an important problem in their previous projects. It was remarked that data holders seek to keep their data as confidential as possible to avoid losing the data's competitive advantage, which creates problems in sharing the data. As another obstacle to sharing project data, it was discussed that sharing data and making it available to outsiders requires extra time and effort from the project stakeholders.

The round table continued by discussing data management plans and the adoption of FAIR principles in the speakers' previous projects. It was noted that FAIR principles are not only about sharing data with outsiders but also about how data are stored internally. Hence, it was discussed that it is necessary to prioritize FAIR and open principles from the beginning of the consortium planning and proposal-writing stages. It was concluded that GDPR and data protection were remarkable components of the speakers' data management plans, in which FAIR principles featured prominently. Nevertheless, there was a consensus on the necessity of changing the mindset about trust and the democratization of knowledge.

Internal Workshop for EERAdata team

Following the round table discussion in the first part, the programme continued with the internal workshop for the EERAdata team. The internal workshop was carried out under the moderation of Prof. Dr. Valeria Jana Schwanitz (HVL) and was based on discussions of the implications for the final project phase, including policy briefs, use case deliverables, and EERAdata platform development. Furthermore, the date and place of the next meeting were discussed. In addition to the practicalities of the project's final phase, Manfred Paier and his team shared their experiences with the EERA platform, including data search and the FAIR data toolbox. Following each participant's evaluation of the workshop and its contributions to FAIR and open data, the three-day workshop ended with the closing remarks of Prof. Dr. Mehmet Efe Biresselioğlu (IUE SENLAB) and Prof. Dr. Valeria Jana Schwanitz (HVL).