- AI driving sentiment change: biology has changed from hypothesis-oriented to engineering-oriented
Innate Innovation is helping Bioplatforms Australia (#BPA) analyse the landscape of artificial intelligence and its potential impact on the biomolecular sciences in Australia. Our work earlier this year confirmed a shifting sentiment within the BPA community. After decades of first-principles work failed to yield accurate predictions of protein structure, the advent of #AlphaFold showed us that the complex and intricate molecular information governing how proteins fold is present in the data. That is, large language model techniques applied to biology can learn and predict how proteins fold. Its advent disrupts fundamental science and industry. Researchers and innovators in early-phase drug discovery can generate and test on the order of 10,000 times more targets for similar time and cost, neatly fitting into an existing commercialisation pipeline. The research transitions from first hypothesising a likely target and then gathering data to test it, to choosing the best-performing target from thousands suggested by the data. Hence, BPA first sought an answer to the question: is this sentiment true across the broader biomolecular science community? That is... Is #generativeAI the 'aha' moment where the biology discipline changes from hypothesis-oriented to engineering-oriented? Building on work by Nature (AI and science: what 1,600 researchers think), we consulted researchers and innovators across BPA's #genomics, #proteomics, #metabolomics and #syntheticbiology communities, including professorial academics, early- to mid-career academics, professional facility staff, career bioinformaticians, ML and data engineers, and computing engineers. The answer was a unanimous yes! A few people think generative AI is a fad, but everyone agrees that the nature of the discipline has changed. The community appreciates machine learning but seeks leadership and a clear connection to how the generative AI era will disrupt the omics domain.
We found: approximately 40% stated that they do not have the skills, data, or computing capability to attempt generative AI and are first seeking the skills; another ~44% stated that they have the skills but not the data (which raises the question of whether they can genuinely leverage generative AI); and only 16% have sufficient skills and data, with access to computing and tools being the limiter on success. How does the generative AI era affect BPA as the national funder of instrument, digital and data infrastructure for the biomolecular sciences? How does this affect the business model of #researchinfrastructure and the research itself? What are the emerging early wins other than AlphaFold? How does Australian omics research and industry remain at the forefront of a decade of generative AI disruption to fundamental science and scaled-out translation? Work is needed to accelerate stakeholder awareness and adoption of generative AI. Stay posted to find out how we're doing it. If you want to contribute your view, you can complete this one-minute survey.
- Gotta love a good (data) scarcity
We often get asked to consider, advise, and strategise on how to invest in AI. Increasingly, we’re hearing the AI bubble will burst, but for those who have lived through the last three decades of Australian housing prices, maybe it won’t! Perhaps #data #scarcity will drive long-term growth. #AI democratises the more advanced things computers do (automation). For example, we speak to Large Language Models (#LLMs) in English, stuff happens, and we get an answer back in English. To do that, the model must appear to understand English and common facts (strictly speaking, it doesn’t). We never taught it English grammar, but one can imagine that if you read 300 billion words, even with the most straightforward strategies for analysing all those passages, you would notice a pattern: English grammar. The emerging business models (and research methods) that exploit this are ingenious and profound! We won’t go through examples here, but they drive massive investments in computing facilities to train and apply AI. Is that the bubble that bursts? Perhaps not. Three hundred billion words is a lot. Is the Internet an endless supply of words? What happens if we run out? If and when data becomes scarce, we expect the investment flows to adjust: your data could become the most valuable part of the ecosystem. Training AI requires more data than we have. A recently revised Epoch AI study finds that LLMs’ need for data will exceed the available stock of public human text between 2026 and 2032. That is close! The signals are there: increasingly, we see major AI players signing deals with strategic data partners and publishers. Organisations, innovators, and researchers realise that LLMs affect their long-standing business models, and they are changing the licences and access methods for their published data to ensure continued sustainability. Data scarcity will not burst the AI bubble, but it will solidify where the value is for those prepared.
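To give a feel for the kind of projection behind estimates like Epoch AI's, here is a minimal sketch of compound demand growth against a fixed stock of text. All figures (stock size, starting demand, growth rate) are illustrative assumptions we chose for this sketch, not Epoch AI's published numbers.

```python
# Illustrative projection: in what year does training-data demand
# exceed the stock of public human text? All figures are assumptions
# for illustration only, not Epoch AI's actual estimates.

STOCK_TOKENS = 300e12      # assumed stock of public human text (tokens)
GROWTH_PER_YEAR = 2.5      # assumed annual growth factor of demand

demand_tokens = 15e12      # assumed demand of frontier training runs in 2024
year = 2024
while demand_tokens < STOCK_TOKENS:
    year += 1
    demand_tokens *= GROWTH_PER_YEAR

print(f"Under these assumptions, demand exceeds stock around {year}")  # 2028
```

The point is not the specific year; it is that under any steady multiplicative growth rate, a fixed stock is exhausted in a handful of doublings, which is why the study's 2026-2032 window is so near.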
How prepared are you for this shift? And how do you make sure you don’t miss out?
- AI in the workforce - does it take jobs?
Many organisations come to us asking what AI investment they should make. Some have stated they will not invest because they fear AI taking jobs. When we hear this, we use imagination to reopen the conversation. Our hypothetical starts with: "Imagine it is the 1960s, and everyone in this room is there in the 1960s. We all have our present jobs. And we all have a secretary beside us, punching away at a mechanical keyboard. But more people are employed now than then, and we certainly do not have secretaries." It usually ends in agreement: it is not AI that they fear. There is evidence to guide us. For example, James Bessen, a successful entrepreneur and later an academic, studies technology's economic impacts on society. In the period leading into 2015, people were concerned that "automation" was taking jobs. He published a paper examining computer automation's impact on 317 occupations from 1980 through 2013. He found: "Employment grows significantly faster in occupations that use computers more." Last year, he followed up on this work, collecting survey data from 917 startups over five years. These startups produce commercial AI products and, through the carefully constructed survey, provide a glimpse into how their products affect labour across industries. Some key findings: AI (appears to) enhance human capabilities rather than replace humans; AI (appears to) shift work from some occupations to others, meaning that some people will lose jobs and new jobs will be created; new jobs (appear to) require new skills, requiring workers to make investments and (perhaps) endure difficult transitions; and many more. In summary, this line of recent and long-term evidence suggests that AI does not and will not reduce jobs. Instead, AI creates efficiencies and increases quality, producing better products and services and driving demand, thus promoting employment growth.
They also state: "The evidence tempers concern about mass unemployment or disemployment of professionals." This is just one example, and yes, there are pros and cons to their methods and assumptions. However, there is good evidence to support investing in AI.
- Authenticity as a guiding principle
How big is the data centre energy problem? We've been through a decade or two of virtualisation-led cloud, effectively sharing physical servers among users. Whenever you get a new email or scroll through social media, you ask a server somewhere to do a tiny piece of work. Leveraging the elasticity of the cloud means you rent that microsecond of use rather than buying a physical machine that sits there powered on but idle. You also inherit that cloud or data centre's commitment to sustainability. Generative AI flips this dynamic on its head. Training large models requires tens to hundreds of GPUs running full tilt for weeks. As we all engage with AI, our collective need consumes vast energy, and hence carbon and water, relative to our Web 2.0, old-school cloud use. Given that, in the pre-generative-AI cloud era, data centres accounted for almost a fifth of all electricity used in the Republic of Ireland in 2022, having risen 400% since 2015 [1], the impact the AI era will have on data centres is a valid concern. Today, every data centre and cloud has a sustainability, net zero or liquid cooling play. But how do we know which are real? Which has the most significant impact? Which has the greatest promise of attaining sustainability? The issue is that measures of data centre efficiency are globally poor: PUE is imprecise, leaving much to interpretation and hence inconsistencies between claims; and building codes, such as NABERS (an otherwise relevant and excellent code in Australia), rely on PUE and are yet to catch up with the AI-era change. Sustainability decision-makers are increasingly conscious of the materiality of claims. Proactive authenticity has an advantage, but sometimes, you must pave the way. To this end, we're incredibly proud of our friends at SMC. A year ago, we celebrated the formation of Sustainable Metal Cloud, a partnership informed in part by our work with Firmus.
Setting authenticity as a principle, SMC has validated their pioneering technology and subsequent efficiency standard for AI factories. They are the first to publish the full suite of power results and performance data for MLPerf, the de facto standard for benchmarking AI resources, on clusters as large as 64 nodes (512 GPUs). In their news article, they claim: "This showcases significant energy savings over conventional air-cooled infrastructure, which when combined within our Singapore data centre, has proven to save close to 50% total energy." We knew that 50% figure was real; how could SMC authentically prove it? Publishing the full power results with their MLPerf benchmark submission is an excellent way! It's so good to see a regional innovation and partnership coming to fruition and leading the global conversation. Well done, SMC! [1] Data centres use almost a fifth of Irish electricity, BBC News (https://www.bbc.com/news/articles/cpe9l5ke5jvo.amp)
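For readers unfamiliar with the PUE metric mentioned above, here is a minimal sketch of how it is computed and why it leaves room for interpretation. The facility figures below are hypothetical, chosen purely to illustrate the arithmetic.

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy divided by
    energy delivered to the IT equipment. 1.0 is the theoretical
    ideal, where every watt drawn reaches the IT load."""
    return total_facility_kwh / it_equipment_kwh

# Hypothetical annual figures for a small facility:
total = 12_000_000   # kWh drawn by the whole facility
it = 8_000_000       # kWh delivered to servers, storage, network

print(f"PUE = {pue(total, it):.2f}")  # PUE = 1.50

# The imprecision: PUE says nothing about how efficiently the IT load
# itself is used. A facility can report an excellent PUE while running
# idle servers, and operators may differ on what counts as "IT load"
# (for example, whether in-rack cooling fans are IT or facility overhead).
```

This is why publishing full power measurements alongside a workload benchmark like MLPerf is a stronger claim than a PUE figure alone: it ties energy consumed to useful work done.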
- AI applied to research - what 1,600 researchers think
We are working with Bioplatforms Australia, an Australian enabler of research infrastructure for the molecular sciences, to understand #AI's impact on the #omics community (BPA's focus areas are #genomics, #proteomics, #metabolomics, and #syntheticbiology). We will soon host a series of events and messages to share what we have learned and to crystallise a near-term strategy. Watch this space. In the meantime, one of the core pieces of work was consulting the community, asking: what is the local sentiment and capability to respond to an AI, and increasingly generative AI, ecosystem? Here's a precursor to our findings. Late last year, Nature published an article entitled AI and science: what 1,600 researchers think. It provides valuable insight from all walks of academia. Some key takeaways mirrored in our findings: the share of research papers with titles or abstracts that mention AI or machine-learning terms has risen to around 8%; lack of skills is the dominant barrier to using AI; and, in an anecdote from the drug discovery community, "Only a very small number of entities on the planet have the capabilities to train the very large models — which require large numbers of GPUs, the ability to run them for months, and to pay the electricity bill. That constraint is limiting science's ability to make these kinds of discoveries" (Garrett Morris, University of Oxford). More than half of those surveyed felt it important that researchers using AI collaborate with the commercial firms dominating computing resources and tool development. Our specialisation is developing a progressive and impactful evidence base, near-term and long-term AI infrastructure, and enablement strategies. We are uniquely qualified to consult deeply technical and academic stakeholders and to facilitate technology partnerships. Dare to Dream!
- Coaching comes in all forms - helping over 1000 computer science students grow
In 2023, we succeeded in developing a sustainability, cloud, and AI line of business. A goal for 2024 is to create a #coaching profile. Yes, we are spreading ourselves thin. However, a key tenet of Innate Innovation is to explore. Besides, we know those who show prowess in technology and science early in their careers often miss the personal leadership training "corporate types" receive throughout theirs. There's an opportunity to help the real innovators shine! We've participated in #technology, #career, #personalleadership, #marketinsight, and #corporategovernance activities with people across the pay grades. We've held workshops, educating and connecting many. It's hard to talk about the individuals we've helped. One thing we can speak about (and something Steve has wanted to do for some time) is going back to his undergraduate alma mater (RMIT) to contribute small amounts of tutoring. He effectively coaches first-years on what the innovating world needs, wants, and values while assisting their understanding of course material, and even does some marking!
- Are privacy protections applied to technology platforms enough to enable AI?
Some recent work and articles have us thinking... Are privacy protections applied to technology platforms enough to enable AI (and the growth of industries from data) without overly weakening the liberties of individuals? Where do we see strong data liberties leading to greater AI potential? The linked article is interesting, as it calls out some shortfalls if we rely on privacy alone. For example: "It transfers the responsibilities for the harms of the data ecosystem to the individuals without giving them the information or tools to enact change." That is, individuals are empowered to control who can use their data at the point of providing it; we influence how it is shared. However, we are not afforded the same opportunity during data use, and the potential for use is endless. Moreover, platform business models focus on driving more data input through personalisation and attention-grabbing, providing more data for more undetermined future use. A great business model! Except for the degree to which harm is readily accessible and prevalent. Moreover, there may be better ways to attain the scale of data needed to draw value from AI. The article's solution is to establish data cooperatives: entities that hold data on individuals' behalf and, as an extensive collection of users, can counter the weight of the platforms. We're not suggesting this is commercially wise for an individual platform, or even a pressing societal priority. Instead, all types of organisations face this dynamic. We're asking: "If one begins investing in a strategic AI future, are there other models worth considering?" It is helpful to point out that data collectives are pervasive in the research sector. We've been through a decade or so of building such collectives, where initially, we did not know how the data would be used, nor the ROI of the effort.
Data collectives, repositories and registries are emerging as the primary prerequisite for the research sector to apply AI to itself. The resultant datasets are far larger than the hoarded dataset of one researcher, one group, or sometimes one discipline. Hence, the ability to coordinate large datasets is increasingly the rate-limiter on discoveries. The lubricant enabling trust and buy-in for large datasets is confidence in the governance of the dataset or collective.
- We now know - digitisation boosts the demand for physical books
Occasionally, a story comes one's way that debunks a commonly held sentiment... Most people presume that digital media, in this example the Google Books project, will cause the end of physical books: the dematerialisation of literature. Amazingly, a recently released study analysing the sales of 37,743 books that Google digitised between 2005 and 2009 found the project increased sales of "paper" books by up to 8%! Around 40% of digitised titles saw their sales increase between 2003-2004 and 2010-2011; by contrast, less than 20% of non-digitised titles had increased sales. The idea is that digitisation enables marketing and exposure at a scale inaccessible to the brick-and-mortar paradigm. Let's face it: it has taken 15-20 years to establish the evidence to debunk the sentiment. Today, however, businesses face many concerns about the digital world. For example, will AI take my job? Or does my AI consume more carbon than it saves? We constantly face asymmetrical risk-management decisions, where we know the penalty for a cyber breach today, but we do not know the future value of different options in controls. Hence, this article is a timely reminder: because public sentiment on the impact of digital leans one way, there is a real chance future evidence shows the opposite! Easy-to-read article: https://studyfinds.org/books-digitizing-literature-paper/ Paper: Abhishek Nagaraj & Imke Reimers in the American Economic Journal https://www.aeaweb.org/articles?id=10.1257/pol.20210702
- Discussing our report at SC Asia
Our first news item for 2024 is that we are hosting a BoF at SC Asia to discuss the outcomes of last year's Sustainability of AI-scale Digital Research Infrastructure workshop report. Taking feedback from all quarters, we've distilled the needs into four targeted workshops:
1. Demand response and efficiencies for compute-intensive data centres. Why? Lift your awareness of, and access to, environmental sustainability innovation in data centres, in particular nearer-term solutions not reliant on major capital building works.
2. Dimensionalising an AI DRI ecosystem for NRIs. Why? Gain a peer-wide uplift of awareness of AI within NRI communities.
3. Underpinning industrial platforms with an AI DRI. Why? Build awareness of best practices in providing DRI (data and pipelines) into commercial plays.
4. Sustaining an AI DRI ecosystem with institutions. Why? Explore commercial sustainability through co-designing with universities.
All four seek a safe and facilitated environment for suppliers and centres, private and public, big and small, to discuss these topics. Hence, we will also use the BoF to explore and prioritise these workshops. Come join us at SC Asia on Thursday, 22 February, in the afternoon. Reach out to engage, shape and secure your spot!
- Introduction to Sustainability and AI mini-documentary
Innate Innovation took part in a mini-documentary cover story investigating the relationship between #AI and #sustainability. Put together by Digital Nation Australia, the documentary responds to the increasing awareness that AI may be a force for #ESG good or may come at a cost. How is AI used to contribute to ESG goals in organisations? What is AI’s impact on ESG? How do these balance now and into the future? Speaking to the topic of a greener future in the 10-minute documentary, we’re hopeful. Yes, on surface value, AI is changing our demands on computers. However, both the old guard and start-ups (see our Firmus Technologies Forge Partnership to Build a Global Network article) are tackling the impact of this change on efficiency and carbon emissions today. In our contribution to the documentary, you’ll hear us explain how the performance of the algorithms behind AI is changing 1000-fold, contributing to decarbonisation at the application level. “We’re still in the baby era of AI algorithms, and algorithms when they improve, we’ve seen this over decades and over many of our world’s innovations, they improve a thousand-fold.” We help customers map, measure and project in this complicated but critical space, so you can get your role in “AI can contribute $115 billion to Australia’s economy if implemented correctly” right!
- Report: Sustainability of AI-scale Digital Research Infrastructure
The report from the Sustainability of AI-scale Digital Research Infrastructure workshop at eResearch Australasia is now available on the workshop's website. Organised by Innate Innovation, the workshop created a safe environment, enabling broad representation from the community to discuss and develop a shared understanding of an AI-centric future. Across all themes, we discussed matters spanning the hardware, services, platforms and research stack. The workshop ratified the gaps in collective knowledge and provided an example approach for ecosystem-wide co-design of pre-competitive facets. Some key findings: at the data centre level, the Australian DRI ecosystem has sufficient buying power to adopt cooling innovation progressively; an increased consciousness exists to tailor performance-per-watt and cooling efficiency to local concerns; the critical gap, however, is the lack of attention given to software efficiency, given its dominant role in overall efficiency. With a baseline understanding set, community engagement and confidence in the format, and distinct themes with learnings, we propose a series of workshops to develop the questions and findings further. You can view the presentation version of the summary report online (expandable to full screen) or register to download the print-friendly version. Please get in touch with us for further evidence or details.
- Announcing the sustainability of AI-scale digital research infrastructure workshop
Are you an #AI creator? An AI user? Or a quality data provider to AI? Adapting #LLMs to an organisation or project is compute-intensive, requiring dedicated use of traditionally #HPC-scale facilities. The electricity needed for data centres is set to explode, to say nothing of the skills gap, raising significant environmental, governance and cost questions. Are we ready for this? Join ARDC, BOM, CSIRO, NCI and Pawsey as we educate and explore what it means to develop a sovereign AI-scale digital research infrastructure ecosystem. This eResearch Australasia 2023 pre-conference workshop combines four distinct groups of our ecosystem: large research-centric data centres, emerging sustainability-driven AI technology vendors, research- and industry-driving AI platforms, and research organisations and communities. With Carmel Walsh's help, Innate Innovation is coordinating this event. Find more information at the workshop's official page.