
  • Bioplatforms Australia's Landscape Analysis of AI - generative AI's impact on biology research infrastructure

    The rapid development and availability of Artificial Intelligence (AI) technology, alongside the associated growth in computational resources and data, have stimulated the widespread adoption of machine learning-based methodologies across research, industry, and society. What is generative AI's impact on biology research infrastructure? Understanding this global AI-based technological change within the biosciences is critical to the Australian community.

    As one of the NCRIS-funded National Research Infrastructures, Bioplatforms Australia sought to understand the impact of the #generativeAI era on the #omics / molecular life sciences community. They engaged us to consult with their stakeholders and ultimately provide preliminary advice and recommendations on a framework for investment. Specifically, our task was to prepare BPA for future investment, test preliminary stakeholder ideas, and establish principles for prioritising high-impact initiatives. For example, #AlphaFold has captured attention and been integral to changing beliefs - we validate that and explore where the next AlphaFolds will come from and what role Australia plays in creating and using them.

    We shared some preliminary findings in September last year. See the news piece: AI driving sentiment change: biology has changed from hypothesis-oriented to engineering-oriented. Earlier this year, we circulated a public report to the BPA stakeholders: BPA landscape analysis of AI & AI-era strategy - consultation & strategy summary. This summary aims to disseminate some key findings and, in turn, empower the community to progress with well-considered AI-era agendas.

    The report suggests a vision that BPA and its stakeholders could adopt for the AI era (the next decade). This vision builds on the sentiment change and ties in two key points - industry is out-investing academia, and biology is a small-data discipline: the generative AI era will decipher the language of biology. The globe is embarking on a grand challenge - to "decipher the language of biology" - not dissimilar to when the discipline embarked on discovering the human genome. It is paramount that Australian research and industry participate in this and yield sustained value from it.

    [Report extract: Summary report of the BPA landscape analysis of AI. This summary includes extracts from two of the seven parts of the report to BPA. Click through to access.]

    Within the AI era, the research/industrial complex will claim to have solved the central dogma of biology. We'll know how to model and predict DNA, RNA, and protein interactions. With that, biology truly becomes a predictive discipline. AI enables us to #tokenize (make words out of) DNA, RNA, and proteins based on our ever-increasing and gigantic source of omics data and our ability to align that data with broader health/phenotype data (a toy sketch below makes this concrete). As the key partner in facilities generating omics data, BPA is ideally placed to drive ecosystem change. Researchers and innovators will then use these words to create sentences that convey a message - that is, perform some specific biological function. The race is on to discover these languages and use them to accelerate drug discovery, personalised medicine, and the understanding of disease. Moreover, this is not just a one-off technology purchase; it's about tying your rate of discovery/invention to, at worst, Moore's Law and, at best, the 1000x speedup improvements seen in AI. If you believe in this imminent disruption, what are the capabilities, and who are the partners, you need to get there?
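    To make the "tokenize DNA" idea above concrete, here is a minimal sketch of one common approach - overlapping k-mer tokenization. The sequence, the value of k and the vocabulary are illustrative assumptions, not taken from the BPA report; real genomic language models use far richer schemes.

        # Minimal sketch: turning a DNA sequence into "words" (tokens) via
        # overlapping k-mers - one common strategy for biological language
        # models. The sequence and k are illustrative only.

        def kmer_tokenize(sequence: str, k: int = 6) -> list[str]:
            """Split a DNA sequence into overlapping k-mer tokens."""
            sequence = sequence.upper()
            return [sequence[i:i + k] for i in range(len(sequence) - k + 1)]

        def build_vocab(tokens: list[str]) -> dict[str, int]:
            """Map each distinct token to an integer id, as a model's input layer expects."""
            return {tok: idx for idx, tok in enumerate(sorted(set(tokens)))}

        seq = "ATGGCCATTGTAATGGGCCGCTG"   # toy sequence
        tokens = kmer_tokenize(seq, k=6)  # ['ATGGCC', 'TGGCCA', 'GGCCAT', ...]
        vocab = build_vocab(tokens)
        ids = [vocab[t] for t in tokens]
        print(tokens[:3], ids[:3])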
    When observing the landscape from this perspective, another facet becomes clear: the AI era unlocks investment from those who are not proficient in biology. For decades, investors have understood how to measure outcomes from an engineering process (process power, scale economies, etc.) and used that knowledge to invest in markets.

    Another key consideration is that the research and industrial community hold small data. The outputs from omics instrumentation are small compared to global knowledge of the biological system. The key to success is safely integrating your small data with global knowledge. This is clearly the tricky part, but when one uses an LLM, this is essentially what one does - we continuously integrate small data with global knowledge. We must create a culture of continuous integration and hence create the factories to do so.

    We engaged 231 researchers, research enablers (capability providers to researchers) and people formerly in the research sector. 110 engaged directly, of which 64 made direct contributions. We held 3 events, incubated 14 ideas and engaged 5 big techs. Some of the challenges to wide-scale adoption are:

    - Understanding AI-era business models and opportunities that give rise to researcher-led ventures
    - The paradigm and skills needed to succeed in the AI era differ from traditional (medical) research ways
    - AI infrastructure is (presently) scarce, and the software is evolving quickly - access to pilots can be cheap, but at the expense of margin at scale
    - The culture, technology, processes and governance of data access for AI are likely outdated - appropriate access will need to happen much faster and at a greater scale
    - Sustaining society's trust whilst also maximising your AI opportunity
    - Technology & continuous improvement is yet another unfunded indirect cost in the research sector!

    Hence, the three near- to medium-term recommended actions are:

    1. entrain to catch up
    2. seed the missing expertise
    3. underpin generative AI omics platforms

    BioCommons, BPA's digital arm, already does (3) for the pre-AI era. It is clearly in the driver's seat to discover and operate the same for the AI era. BPA's partnership arm already coordinates the capture and reuse of data and is similarly placed to drive AI-era value creation through data. To quote Jensen (CEO of NVIDIA, perhaps a decade ago): if you don't build it, they won't come! The only question for everyone is - do you have the evidence base to believe, and do you have access to the capabilities to get there?!

  • Mastering the AI era - perhaps the process is the same as mastering the Iliad

    #AI, #cyber, #data, and #sustainability - it can feel that everything you hear about them is unintelligible, that there is no way to master them or capture their essence. The problem is not new - we experience the same thought journey about one thing or another. For example, I had always wanted to read or, more precisely, understand the story and lessons of Homer's Iliad. It has influenced and entertained so many leaders over thousands of years. But it is 24 books and some 15,000 lines, written thousands of years ago. I can't even read the short-form messages my kids send me without Google. Understanding the raw text of the Iliad would be laborious, and the layers of interpretation by everyone else between then and now would mean I have to accept bias. I have no idea how to tell a good bias from a bad one! I don't know where to start.

    But disruption can still occur to a nearly three-thousand-year-old artifact. As the English Literature Teacher (Adrian D'Ambra) puts it... "Can I get to the Iliad story rather than the allegory?" There has been an innovation, not in the original text but in the translation - specifically Emily Wilson's 2023 translation of the Iliad. It enables new accessibility to the raw material, upon which the English Literature Teacher's writings provide another layer of expertise. Those writings are on literature, not technology: 24 short accounts, one per book, of the Iliad's "story", as recently translated by Emily Wilson, keeping contemporary observations aside. Adrian recasts the story as a succinct account of what happens, essentially unencumbered by interpretation, and then separately describes the observations and topics raised. Brilliant - I have a place to start and a way to achieve the goal of understanding both the essence and the bias.

    There are many parallels to today - the birth of the AI era. The Iliad was written during significant, technology-driven change - the shift from the Bronze Age to the Iron Age - and confusingly intertwines norms from both. Is the confusion we feel today, as we transition to the AI era, similar? Perhaps DeepSeek represents a fundamental shift in accessibility, and soon we will all gain access to LLMs in many facets of our daily lives. But the more pertinent question is: who do you trust to take you through the journey of mastering the AI era? How do you measure bias from advice/coaching, or from the vendors/technology providers/investors themselves?

    We are helping C-suite leaders without a technology background, and technology engineers/sales with little global business background, go through this journey. Engaging with the technology while attaining confidence over the hype is a core value of our work. We would like to talk to you about it. You may even safely read the Iliad.

  • 2025 - make-or-break for AI?

    Welcome to 2025 - according to many pundits, the make-or-break year for #generativeAI. Does DeepSeek suddenly change the story? The total global investment in AI in the first 1-2 years since ChatGPT's release has been smaller than in the dot-com and PC booms. Essentially, the "big seven" have made all the investments. But that's changing - plans for vast new data centres are grabbing headlines. That is, until now, when DeepSeek's release of "cheap AI" has markets going wild. How should we look at this?

    Firstly, let's assume the claims by/about DeepSeek are legitimate. Whether they are is not essential to the thought exercise. Also, to keep this relatively snappy, we won't cover all the ground. Here are four quick points:

    1. DeepSeeks will happen! Whereas Moore's Law at 40% CAGR characterises the improvements in chips (and hence GPUs for AI in this instance), #algorithms in their boom era are known to improve at 1000% CAGR (see the toy comparison at the end of this post). We can see this in LLMs:

    - the steps OpenAI made between GPT-2 and GPT-3
    - the improvements the global community contributed once Meta openly released their model
    - possibly now with DeepSeek's approach

    A key point is that algorithm booms are relatively short-lived (1-10 years) compared to Moore's Law, which has existed for 5-6 decades. Betting on Moore's Law is a safer long-term investment. But there's a disruption to leverage. The other key point is that AI makers can learn from and adapt DeepSeek's approach, as it is open source. If it weren't open, there would be too much scepticism to see viral uptake. Because it is open, all AI makers can benefit. In any given year, there are algorithms improving significantly faster than Moore's Law, and it turns out the generative AI family is one of them, largely because much of the learning is made open.

    2. Increasing pressure to get your IT/AI strategy & culture right. You can either use DeepSeek as an online service or, with some IT/computational savviness, run it on your personal computer. People report that the online version limits what can be asked, but you gain access to the technology early. The AI era is riddled with these trade-offs between short-term gain and long-term pain. Remember that blocking your brightest minds from winning through technology today does mitigate immediate risk, but at the cost of a competitor starting the exponential #learningcurve early. If you don't catch the train early enough, the company will lose in the end.

    3. Data is king! Efforts like DeepSeek will need to use ChatGPT (etc.) to shortcut the process of acquiring sufficient and appropriate data to learn from. People are reporting seeing this in effect. Presumably, existing AI makers will make it harder to use this shortcut. If so, these models are followers and won't sustain being best at anything for very long. Moreover, your data is #scarce! LLMs need your data, making it tangibly valuable. Once your data is in an AI model, you lose control of its monetary value - choose your partners, and what goes to them, wisely! This extends to the data about you: your location, where you move your mouse, and so on. Nothing is for free - you're ultimately paying for the service in some way, most likely by giving them data about you!

    4. Jevons paradox awakens for LLMs. Tech companies have to believe that the world needs AI. Your house already has hundreds of microprocessors. Today, even your car door has more than one, even if the car is 10-20 years old. We will all have hundreds of AIs doing stuff around us wherever we are.
    This is the scale of demand needed to justify the investment in making generative AI work. In turn, tech companies have to believe in the #Jevons paradox - as a resource's cost drops, overall demand increases, causing resource consumption to rise. Whether you're a property investor, a cloud operator, an AI maker, an NVIDIA or a TSMC, AI getting cheaper, even in big steps, only means more AI for you.

    2025 will still see a democratisation of investment in the AI boom, probably most observable in property investment in data centres - buying the energy rights and filling them with GPUs. That is, there'll be investment by the next lot beyond the big seven. However, the inevitable DeepSeeks across the globe may enable a plethora of startups to succeed quickly, opening up investment by the many. Moreover, as the lower levels of the stack engage in a race to the bottom (but still make a lot of money), and just as in medicine, where foundational research is shared as a common good, you and I (our modern economies) can engage and compete at higher levels - targeting consumers, emotions, the environment, and so on. 2025 should be good for AI!
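    To make the growth-rate contrast in point 1 concrete, here is a toy compounding comparison. The 40% and 1000% CAGR figures come from the post above; the horizons and the Python framing are illustrative assumptions only.

        # Toy comparison for point 1: chips improving at ~40% CAGR (Moore's
        # Law-like) versus algorithms in a boom era improving at ~1000% CAGR
        # (i.e. 11x per year). Rates are from the post; horizons are illustrative.

        def improvement(cagr: float, years: int) -> float:
            """Cumulative improvement factor after compound growth."""
            return (1 + cagr) ** years

        for years in (1, 3, 5):
            chips = improvement(0.40, years)   # 40% CAGR
            algos = improvement(10.0, years)   # 1000% CAGR
            print(f"{years} years: chips x{chips:,.1f} vs algorithms x{algos:,.0f}")

        # After 5 years: chips ~x5.4, algorithms ~x161,051 - a short algorithm
        # boom can dwarf decades of hardware gains while it lasts.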

  • Steve & the Goannas win the Hockey Masters Over45s World Cup

    Congratulations to our Steve Quenette and the Australian Over45s team for winning the Hockey Masters World Cup in Auckland! The team dominated the competition, scoring 36 goals and conceding only 1, culminating in a 5-1 win over South Africa in the grand final. Steve's assessment was: "The coaching and support staff were first-rate, enabling and empowering each player and, ultimately, the team to gel, trust and shine. The team was fitter (and load-managed better), more present and more wholly aware of our strategies than our opponents. Life-long friendships and memories were made." ... drawing parallels to the executive coaching and personal leadership we deliver. The team was teeming with talent and leadership. Steve's journey was not without challenges - he overcame a week-long cold and then a rolled ankle throughout the two weeks of competition. Yet he walked away with a goal (several misses!) and accolades for front-half defensive pressure. He was also appointed the team's social media coordinator. You can recap the action and shenanigans here on Facebook.

  • Learnings from a landscape analysis of AI/ML infrastructure opportunities and challenges

    It is that time of the year when those #enabling the #digital aspects of #research get together - eResearch Australasia 2024 in Melbourne. It is also timely, as the National Digital Research Infrastructure (NDRI) working group has been surveying the community. This year, our contribution to the conference is putting together a BoF entitled: Learnings from a landscape analysis of AI/ML infrastructure opportunities and challenges. Building on our work with Bioplatforms Australia this year, the BoF expands to explore recent learnings in other research domains. The panel includes Amanda Barnard (Deputy Director of ANU's School of Computing), Andrew Gilbert (CEO of Bioplatforms Australia), Tim Rawling (CEO of AuScope), and Kate Michie (Chief Scientist at UNSW's Structural Biology Facility). Come join us from 11:25 to 12:45 on Wednesday, 30th October, where we explore how these communities are beginning to create an environment for success in the generative AI era.

    Abstract

    AI is increasingly integrated into scientific discovery. It augments and accelerates research, helping scientists generate hypotheses, design experiments, collect and interpret large datasets, and gain insights that might not have been possible using traditional scientific methods alone. Australia is both safety-centric and behind in the adoption of generative AI. This work takes a progressive posture - mapping out a strategy for disruptive, scaled-out, safe and sustainable generative AI participation and adoption by the omics community. As a naturally data-centric enabler of research infrastructure, Bioplatforms Australia (BPA) has embarked on a mission to understand AI's impact on the omics community (genomics, proteomics, metabolomics, and synthetic biology are BPA's focus areas) and the role AI will play in the advanced utility of increasingly integrated laboratory data outputs. It seeks to ensure impact through AI adoption by its partner infrastructure facilities, data framework initiatives, and platform capabilities (BioCommons). We've invited friends from structural biology, geoscience and nanoparticle research to contribute their recent learnings.

    What discoveries will be made because of AI? How and why do partner facilities adopt AI innovation? How are big-tech, pharma, and investment ecosystems changing the roles and opportunities for our research ecosystem? What are the workforce needs? What are the data needs? What do we require from the DRI? Do we need / when do we need an AI factory? What does a re-imagined ecosystem of industry, researchers, and research infrastructure look like?

    This BoF will briefly share what we have learned from our journey thus far. A panel of selected stakeholders will discuss the nature of the change being faced by infrastructure enablers.

  • AI driving sentiment change: biology has changed from hypothesis-oriented to engineering-oriented

    Innate Innovation is helping Bioplatforms Australia (#BPA) analyse the landscape of artificial intelligence and its potential impact on biomolecular sciences in Australia. Our work earlier this year confirmed a shifting sentiment within the BPA community. After decades of first-principles work failing to yield accurate predictions of protein structure, the advent of #AlphaFold has shown us that the complex and intricate molecular information of how proteins fold is present in the data. That is, large language model techniques applied to biology can learn and predict how proteins fold. Its advent disrupts fundamental science and industry. Researchers and innovators in early-phase drug discovery can generate and test on the order of 10,000 times more targets for a similar time and cost, neatly fitting into an existing commercialisation pipeline. The research transitions from first hypothesising a likely target and gathering data to test it, to choosing the best-performing target from thousands inspired by data.

    Hence, BPA first sought an answer to the question: is this sentiment true across the broader biomolecular science community? That is... Is #generativeAI the 'aha' moment where the biology discipline changes from hypothesis-oriented to engineering-oriented?

    Building on some work by Nature (AI and science - what 1,600 researchers think), we consulted researchers and innovators across BPA's #genomics, #proteomics, #metabolomics and #syntheticbiology communities, ranging from professorial academics, early- to mid-career academics, professional facility staff, career bioinformaticians, ML & data engineers and computing engineers. The answer was a unanimous yes! A few people think generative AI is a fad, but everyone agrees that the nature of the discipline has changed. The community appreciates machine learning but seeks leadership and a connection to how the generative AI era will disrupt the omics domain. We found:

    - Approximately 40% stated that they do not have the skills, data, or computing capability to attempt generative AI, and first seek the skills,
    - Another ~44% stated that they have the skills but not the data (which begs the question of whether they can genuinely leverage generative AI), and
    - Only 16% have sufficient skills and data, where access to computing and tools was the limiter on success.

    How does the generative AI era affect BPA as the national funder of instrument, digital and data infrastructure for the biomolecular sciences? How does this affect the business model of #researchinfrastructure and the research itself? What are the emerging early wins other than AlphaFold? How does Australian omics research and industry remain at the forefront of a decade of generative AI disruption to fundamental science and scaled-out translation? Work is needed to accelerate stakeholder awareness and adoption of generative AI. Stay posted to find out how we're doing it. If you want to contribute your view/sentiment, you can complete this one-minute survey.

  • Gotta love a good (data) scarcity

    We often get asked to consider, advise, and strategise on how to invest in AI. Increasingly, we're hearing that the AI bubble will burst - but for those who have lived through the last three decades of Australian housing prices, maybe it won't! Perhaps #data #scarcity will drive long-term growth.

    #AI democratises the more advanced things computers do (automation). For example, we speak to Large Language Models (#LLMs) in English, stuff happens, and we get an answer back in English. To do that, an LLM must appear to understand English and common facts (strictly speaking, it doesn't). We never taught it English grammar, but one can imagine that if you read 300 billion words, then even with the most straightforward strategies for analysing all those passages, you would notice a pattern - English grammar (a toy sketch at the end of this post makes the idea concrete). The emerging business models (and research methods) that exploit this are ingenious and profound! We won't go through examples here, but they drive massive investments in computing facilities to train and apply AI. Is that the bubble that bursts? Perhaps not.

    Three hundred billion words is a lot. Is the Internet an endless supply of words? What happens if we run out of words? If and when data becomes scarce, we expect the investment flows to adjust - your data could become the most valuable part of the ecosystem. Training AI requires more data than we have. A recently revised Epoch AI study finds LLMs' need for data will exceed the available stock of public human text data between 2026 and 2032. That is close! The signals are there - increasingly, we see major AI players signing deals with strategic data partners and publishers. Organisations, innovators, and researchers realise that LLMs affect their long-standing business models, and they are changing the licences and access methods for their published data to ensure continued sustainability.

    Data scarcity will not burst the AI bubble, but it will solidify where the value is for those prepared. How prepared are you for this shift? And how do you make sure you don't miss out?
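    As a toy illustration of "noticing a pattern" from raw text: even the most naive strategy - counting which word follows which - starts to recover grammar-like regularities. The tiny corpus below is an invented example; real LLMs do something far richer over hundreds of billions of words.

        # Count which word follows which - the simplest possible "analysis".
        from collections import Counter, defaultdict

        corpus = ("the cat sat on the mat . "
                  "the dog sat on the rug . "
                  "a cat ran on the grass .").split()

        follows: dict[str, Counter] = defaultdict(Counter)
        for w1, w2 in zip(corpus, corpus[1:]):
            follows[w1][w2] += 1

        # Even here, a pattern emerges that nobody taught the counter:
        # "on" is always followed by "the" (preposition + article), and
        # "the" is always followed by a noun-like word.
        print(follows["on"].most_common())   # [('the', 3)]
        print(follows["the"].most_common())  # [('cat', 1), ('mat', 1), ...]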

  • AI in the workforce - does it take jobs?

    Many organisations come to us asking what AI investment they should make. Some have stated they will not invest, as they fear AI taking jobs. When we hear this, we use imagination to reopen the conversation. Our hypothetical starts with, "Imagine it is the 1960s, and everyone in this room is there in the 1960s. We all have our present jobs. And we all have a secretary beside us, punching away at a mechanical keyboard. But more people are employed now than then, and we certainly do not have a secretary". It usually ends with agreement - it is not AI that they fear.

    There is evidence to guide us. For example, James Bessen, a successful entrepreneur and later an academic, studies technology's economic impacts on society. In the period leading into 2015, people were concerned that "automation" was taking jobs. He published a paper examining computer automation's impact on 317 occupations from 1980 through 2013. He found: "Employment grows significantly faster in occupations that use computers more." Last year, he followed up on this work, collecting survey data from 917 startups over five years. These startups produce commercial AI products and, through the carefully constructed survey, provide a glimpse into how their products impact labour across industries. Some key findings:

    - AI (appears to) enhance human capabilities rather than replace humans.
    - AI (appears to) create a shift in work from some occupations to others, meaning that some people will lose jobs, and new jobs will be created.
    - New jobs (appear to) require new skills, requiring workers to make investments and (perhaps) to endure difficult transitions.
    - And many more.

    In summary, this line of recent and long-term evidence suggests that AI does not and will not reduce jobs. Instead, AI creates efficiencies and increases quality, producing better products/services and driving demand, thus promoting employment growth. The authors also state: "The evidence tempers concern about mass unemployment or disemployment of professionals." This is just one example, and yes, there are pros and cons to their methods and assumptions. However, there's some good evidence to support investing in AI.

  • Authenticity as a guiding principle

    How big is the data centre energy problem? We've been through a decade or two of a virtualisation-led cloud, effectively sharing physical servers among users. Whenever you get a new email or scroll through social media, you ask a server somewhere to do a tiny piece of work. Leveraging the elasticity of the cloud means you rent that microsecond of use rather than buying a physical machine that sits there powered on but idle. You also get that cloud/data centre's commitment to sustainability.

    Generative AI flips this dynamic on its head. Training large models requires tens to hundreds of GPUs running full tilt for weeks. As we all engage AI, our collective need consumes vast energy, and hence carbon and water, relative to our Web-2.0 / old-school cloud use. Given that (pre-generative-AI, cloud-era) data centres accounted for almost a fifth of all electricity used in the Republic of Ireland in 2022, rising by 400% since 2015 [1], the impact the AI era will have on data centres is a valid concern.

    Today, every data centre and cloud has a sustainability, net zero or liquid cooling play. But how do we know which are real? Which has the most significant impact? Which has the greatest promise of attaining sustainability? The issue here is that measures of data centre efficiency are globally poor:

    - PUE is imprecise, leaving much to interpretation and hence inconsistencies between claims (a toy illustration follows this post), and
    - building codes, such as NABERS (an otherwise relevant and excellent code in Australia), use PUE and are yet to catch up with the AI-era change.

    Sustainability decision-makers are increasingly conscious of the materiality of claims. Proactive authenticity has an advantage, but sometimes, you must pave the way. To this end, we're incredibly proud of our friends at SMC. A year ago, we celebrated the formation of Sustainable Metal Cloud, a partnership informed in part by our work with Firmus. Setting authenticity as a principle, SMC has validated their pioneering technology and subsequent efficiency standard for AI factories. They are the first to publish the full suite of power results and performance data for MLPerf - the de facto standard for benchmarking AI resources - on clusters as large as 64 nodes (512 GPUs). In their news article, they claim: "This showcases significant energy savings over conventional air-cooled infrastructure, which when combined within our Singapore data centre, has proven to save close to 50% total energy."

    We knew that 50% - but how could SMC authentically prove it? Publishing the full power results with their MLPerf benchmark submission is an excellent way! It's so good to see a regional innovation and partnership coming to fruition and leading the global conversation! Well done SMC!

    [1] Data centres use almost a fifth of Irish electricity, BBC News (https://www.bbc.com/news/articles/cpe9l5ke5jvo.amp)
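    For readers new to PUE: it is the ratio of total facility energy to the energy delivered to IT equipment, with 1.0 being perfect. The toy numbers below are illustrative assumptions only; they show why identical PUE figures can hide very different real efficiency.

        # PUE (Power Usage Effectiveness) = total facility energy / IT energy.
        # Illustrative numbers only - not measurements from any facility.

        def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
            return total_facility_kwh / it_equipment_kwh

        site_a = pue(total_facility_kwh=1300.0, it_equipment_kwh=1000.0)
        site_b = pue(total_facility_kwh=13.0, it_equipment_kwh=10.0)
        print(f"Site A PUE: {site_a:.2f}, Site B PUE: {site_b:.2f}")  # both 1.30

        # Identical PUE, yet PUE says nothing about how much useful training
        # work each kWh of IT load produces (throttled or idle GPUs still count
        # as "IT energy"). That is why publishing measured power alongside a
        # workload benchmark such as MLPerf is a stronger claim than PUE alone.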

  • AI applied to research - what 1,600 researchers think

    We are working with Bioplatforms Australia, an Australian enabler of research infrastructure for the molecular sciences, to understand #AI's impact on the #omics community (BPA's focus areas are #genomics, #proteomics, #metabolomics, and #syntheticbiology). We will soon host a series of events and messages to share what we have learned and to crystallise a near-term strategy. Watch this space.

    In the meantime, one of the core pieces of work was consulting the community, asking: what are the local sentiment and capability to respond to an AI, and increasingly generative AI, ecosystem? Here's a precursor to our findings. Late last year, Nature published an article entitled AI and science: what 1,600 researchers think. It provides valuable insight from all walks of academia. Some key takeaways mirrored in our findings are:

    - The share of research papers with titles or abstracts that mention AI or machine-learning terms has risen to around 8%.
    - Lack of skills is the dominant barrier to using AI.
    - An anecdote from the drug discovery community: "Only a very small number of entities on the planet have the capabilities to train the very large models — which require large numbers of GPUs, the ability to run them for months, and to pay the electricity bill. That constraint is limiting science's ability to make these kinds of discoveries", Garrett Morris, University of Oxford.
    - More than half of those surveyed felt it important that researchers using AI collaborate with the commercial firms dominating computing resources and tool development.

    Our specialisation is developing a progressive and impactful evidence base, near-term and long-term AI infrastructure, and enablement strategies. We are uniquely qualified to consult deeply technical and academic stakeholders and to facilitate technology partnerships. Dare to Dream!

  • Coaching comes in all forms - helping over 1000 computer science students grow

    In 2023, we succeeded in developing a sustainability, cloud, and AI line of business. A goal for 2024 is to create a #coaching profile. Yes, we are spreading ourselves thin. However, a key tenet of Innate Innovation is to explore. Besides, we know those who show prowess in technology & science early in their careers often miss the personal leadership training "corporate types" get throughout their careers. There's an opportunity to help the real innovators shine! We've participated in #technology, #career, #personalleadership, #marketinsight, and #corporategovernance activities with people across the pay grades. We've held workshops, educating and connecting many. It's hard to talk about the individuals we've helped. One thing we can speak about (and something Steve has wanted to do for some time) is going back to his undergraduate alma mater (RMIT) and contributing small amounts of tutoring. He effectively coaches first-years on what the innovating world needs, wants, and values, while helping them understand the course material - and he even does some marking!

  • Are privacy protections applied to technology platforms enough to enable AI?

    Some recent work and articles have us thinking... Are privacy protections applied to technology platforms enough to enable AI (and the growth of industries from data) without overly weakening the liberties of individuals? Where do we see strong data liberties leading to greater AI potential?

    The linked article is interesting, as it calls out some shortfalls if we rely on privacy alone. For example: "It transfers the responsibilities for the harms of the data ecosystem to the individuals without giving them the information or tools to enact change." That is, individuals are empowered to control who can use their data at the point of providing it - we influence how it is shared. However, we are not afforded the same opportunity during data use. And the potential for use is endless. Moreover, platform business models focus on driving more data input through personalisation and attention-grabbing, providing more data for more undetermined future use. Great business model! Except for the degree to which harm is readily enabled and prevalent. Moreover, there may be better ways to attain the scale of data needed to draw value from AI.

    The article's solution is to establish data cooperatives - entities that hold data on individuals' behalf and, as an extensive collection of users, can counter the weight of the platforms. We're not suggesting this is commercially wise for an individual platform, or even a desperate societal priority. Instead, all types of organisations face this dynamic. We're asking: "If one begins investing in a strategic AI future, are there other models worth considering?"

    It is helpful to point out that data collectives are pervasive in the research sector. We've been through a decade or so of building such collectives, where initially we did not know how the data would be used, nor the ROI of the effort. Data collectives / repositories / registries are emerging as the primary prerequisite by which the research sector applies AI to itself. The resultant datasets are far larger than the hoarded dataset of one researcher, one group and sometimes one discipline. Hence, the ability to coordinate large datasets is increasingly the rate-limiter to discoveries. The lubricant enabling trust and buy-in for large datasets is belief in the governance of the dataset/collective.
