- Mastering the AI era - perhaps the process is the same as mastering the Iliad
#AI, #cyber, #data, and #sustainability: it can feel that everything you hear about them is unintelligible, that there is no way to master them or capture their essence. The problem is not new; we all experience the same thought journey about something or other. For example, I had always wanted to read, or more precisely understand, the story and lessons of Homer's Iliad. It has influenced and entertained leaders for thousands of years. But it is 24 books and some 15,000 lines, written thousands of years ago. I can't even read the short-form messages my kids send me without Google. Understanding the raw text of the Iliad would be laborious, and the layers of interpretation added by everyone between then and now mean I have to accept bias. I have no idea how to tell a good bias from a bad one! I don't know where to start.

But disruption can still come to a nearly three-thousand-year-old artifact. As the English Literature Teacher (Adrian D'Ambra) puts it: "Can I get to the Iliad story rather than the allegory?" There has been an innovation, not in the original text but in the translation, specifically Emily Wilson's 2023 translation of the Iliad. It makes the raw material newly accessible, and the English Literature Teacher's writings provide another layer of expertise on top. His writings are on literature, not technology: 24 short accounts, one per book, of the Iliad's "story" as recently translated by Wilson, keeping contemporary observations aside. Adrian recasts the story as a succinct account of what happens, essentially unencumbered by interpretation, and then separately describes the observations and topics raised. Brilliant: I have a place to start, and I can achieve the goal of understanding both the essence and the bias.

There are many parallels to today and the birth of the AI era. The Iliad was written during a technology-driven disruption, the shift from the Bronze Age to the Iron Age, and confusingly intertwines norms from both. Is the confusion we feel today, as we transition to the AI era, similar? Perhaps DeepSeek represents a fundamental shift in accessibility, and soon we will all gain access to LLMs in many facets of our daily lives. But the more pertinent question is: who do you trust to take you through the journey of mastering the AI era? How do you measure the bias of advice and coaching, or of the vendors, technology providers, and investors themselves? We are helping C-suite leaders without a technology background, and technology engineers and salespeople with little global business background, go through this journey. Engaging with the technology while attaining confidence over hype is a core value of our work. We would like to talk to you about it. You may even safely read the Iliad.
- 2025 - make-or-break for AI?
Welcome to 2025: according to many pundits, the make-or-break year for #generativeAI. Does DeepSeek suddenly change the story? The total global investment in AI in the first couple of years since ChatGPT's release has been smaller than in the dot-com and PC booms. Essentially, the "big seven" have made all the investments. But that's changing: plans for vast new data centres are grabbing headlines. That is, until now, when DeepSeek's release of "cheap AI" has markets going wild. How should we look at this? Firstly, let's assume the claims by and about DeepSeek are legitimate; whether they are is not essential to the thought exercise. Also, to keep this relatively snappy, we won't cover all the ground. Here are four quick points:

1. DeepSeeks will happen!

Whereas Moore's Law, at roughly 40% CAGR, characterises the improvements in chips (and hence GPUs for AI in this instance), #algorithms in their boom era are known to improve at around 1000% CAGR. We can see this in LLMs:
- the steps OpenAI made between GPT-2 and GPT-3,
- the improvements the global community contributed once Meta openly released their model, and
- possibly now, DeepSeek's approach.

A key point is that algorithm booms are relatively short-lived (1-10 years) compared to Moore's Law, which has held for 5-6 decades. Betting on Moore's Law is the safer long-term investment. But there's a disruption to leverage. The other key point is that AI makers can learn from and adapt DeepSeek's approach because it is open source. If it weren't open, there would be too much scepticism for viral uptake. Because it is open, all AI makers can benefit. In any given year there are algorithms improving significantly faster than Moore's Law, and it turns out the generative AI family is one of them, largely because so much of the learning is made open. (A back-of-envelope comparison of these growth rates follows this piece.)

2. Increasing pressure to get your IT/AI strategy and culture right

You can either use DeepSeek as an online service or, with some IT and computational savviness, run it on your personal computer. People report that the online version limits what can be asked, but you gain access to the technology early. The AI era is riddled with these trade-offs between short-term gain and long-term pain. Remember that blocking your brightest minds from winning through technology today does mitigate immediate risk, but at the cost of a competitor starting the exponential #learningcurve early. If you don't catch the train early enough, the company loses in the end.

3. Data is king!

Efforts like DeepSeek need to use ChatGPT (and the like) to shortcut the process of acquiring sufficient and appropriate data to learn from, and people report seeing this in effect. Presumably, existing AI makers will make this shortcut harder. If so, such models are followers and won't sustain being best at anything for very long. Moreover, your data is #scarce! LLMs need your data, making it tangibly valuable. Once your data is in an AI model, you lose control of its monetary value, so choose your partners, and what goes to them, wisely! This extends to the data about you: your location, where you move your mouse, and so on. Nothing is free; you're ultimately paying for the service in some way, most likely by giving them data about you.

4. Jevons paradox awakens for LLMs

Tech companies have to believe that the world needs AI. Your house already has hundreds of microprocessors. Today, even your car door has more than one, even if the car is 10-20 years old. We will all have hundreds of AIs doing stuff around us wherever we are.
This is the scale of demand needed to justify the investment in making generative AI work. In turn, tech companies have to believe in #Jevons paradox: as a resource's cost drops, overall demand increases, causing total resource consumption to rise. Whether you're a property investor, a cloud operator, an AI maker, an NVIDIA or a TSMC, AI getting cheaper, even in big steps, only means more AI for you. 2025 will still see a democratisation of investment in the AI boom, probably most observable in property investment in data centres: buying the energy rights and filling them with GPUs. That is, there'll be investment by the next cohort beyond the big seven. However, the enviable DeepSeeks across the globe may enable a plethora of startups to succeed quickly, opening up investment by the many. Moreover, as the lower levels of the stack engage in a race to the bottom (while still making a lot of money), and just as in medicine where foundational research is shared as a common good, you and I (our modern economies) can engage and compete at higher levels, targeting consumers, emotions, the environment, and so on. 2025 should be good for AI!
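As a hedged aside on point 1, here is a minimal back-of-envelope sketch of how the two growth rates quoted above compound. The 40% and 1000% CAGR figures are treated as illustrative assumptions, not measured values:

```python
# Back-of-envelope comparison of the two growth rates quoted above.
# Assumption (illustrative, not measured): hardware improves at 40% CAGR
# (Moore's Law) and a booming algorithm family at 1000% CAGR (11x per year).

def cumulative_improvement(cagr: float, years: int) -> float:
    """Total improvement factor after compounding at `cagr` for `years` years."""
    return (1 + cagr) ** years

for years in (1, 3, 5, 10):
    hardware = cumulative_improvement(0.40, years)    # chips / GPUs
    algorithms = cumulative_improvement(10.0, years)  # boom-era algorithms
    print(f"{years:>2} yr: hardware x{hardware:,.1f} | algorithms x{algorithms:,.0f}")
```

Under these assumptions, three years of an algorithm boom (a factor of roughly 1,300) already dwarfs a full decade of Moore's Law (roughly 29x), which is why even a short-lived boom is a disruption worth leveraging.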
- Steve & the Goannas win the Hockey Masters Over45s World Cup
Congratulations to our Steve Quenette and the Australian Over45s team for winning the Hockey Masters World Cup in Auckland! The team dominated the competition, scoring 36 goals and conceding only 1, culminating in a 5-1 win over South Africa in the grand final. Steve's assessment: "The coaching and support staff were first-rate, enabling and empowering each player and, ultimately, the team to gel, trust and shine. The team was fitter (and load-managed better), more present and more wholly aware of our strategies than our opponents. Life-long friendships and memories were made." It draws parallels to the executive coaching and personal leadership we deliver. The team was teeming with talent and leadership. Steve's journey was not without challenges, overcoming a week-long cold and then a rolled ankle across the two weeks of competition. Yet he walked away with a goal (and several misses!) and accolades for front-half defensive pressure. He was also appointed the team's social media coordinator. You can recap the action and shenanigans here on Facebook.
- Learnings from a landscape analysis of AI/ML infrastructure opportunities and challenges
It is that time of the year when those #enabling the #digital aspects of #research get together: eResearch Australasia 2024 in Melbourne. It is also timely, as the National Digital Research Infrastructure (NDRI) working group has been surveying the community. This year, our contribution to the conference is a BoF entitled "Learnings from a landscape analysis of AI/ML infrastructure opportunities and challenges". Building on our work with Bioplatforms Australia this year, the BoF expands to explore recent learnings in other research domains. The panel includes Amanda Barnard (Deputy Director at ANU's School of Computing), Andrew Gilbert (CEO of Bioplatforms Australia), Tim Rawling (CEO of AuScope), and Kate Michie (Chief Scientist at UNSW's Structural Biology Facility). Come join us from 11:25 to 12:45 on Wednesday, 30th October, where we explore how these communities are beginning to create an environment for success in the generative AI era.

Abstract

AI is increasingly integrated into scientific discovery. It augments and accelerates research, helping scientists generate hypotheses, design experiments, collect and interpret large datasets, and gain insights that might not have been possible using traditional scientific methods alone. Australia is both safety-centric and behind in the adoption of generative AI. This work takes a progressive posture, mapping out a strategy for disruptive, scaled-out, safe and sustainable generative AI participation and adoption by the omics community. As a naturally data-centric enabler of research infrastructure, Bioplatforms Australia (BPA) has embarked on a mission to understand AI's impact on the omics community (genomics, proteomics, metabolomics, and synthetic biology are BPA's focus areas) and the role AI will play in the advanced utility of increasingly integrated laboratory data outputs. It seeks to ensure impact through AI adoption by its partner infrastructure facilities, data framework initiatives, and platform capabilities (BioCommons). We've invited friends from structural biology, geoscience and nanoparticles to contribute their recent learnings.

- What discoveries will be made because of AI?
- How and why do partner facilities adopt AI innovation?
- How are big-tech, pharma, and investment ecosystems changing the roles and opportunities for our research ecosystem?
- What are the workforce needs? What are the data needs? What do we require from the DRI?
- Do we need an AI factory, and if so, when?
- What does a re-imagined ecosystem of industry, researchers, and research infrastructure look like?

This BoF will briefly share what we have learned from our journey thus far. A panel of selected stakeholders will discuss the nature of the change being faced by infrastructure enablers.
- AI driving sentiment change: biology has changed from hypothesis-oriented to engineering-oriented
Innate Innovation is helping Bioplatforms Australia (#BPA) analyse the landscape of artificial intelligence and its potential impact on biomolecular sciences in Australia. Our work earlier this year confirmed a shifting sentiment within the BPA community. After decades of work on first principles failed to yield accurate predictions of protein structure, the advent of #AlphaFold has shown us that the complex and intricate molecular information of how proteins fold is present in the data. That is, large language models applied to biology can learn and predict how proteins fold. Its advent disrupts fundamental science and industry. Researchers and innovators in early-phase drug discovery can generate and test on the order of 10,000 times more targets for similar time and cost, neatly fitting into an existing commercialisation pipeline. The research transitions from first hypothesising a likely target and gathering data to test it, to choosing the best-performing target from thousands inspired by data.

Hence, BPA first sought an answer to the question: is this sentiment true across the broader biomolecular science community? That is, is #generativeAI the 'aha' moment where the biology discipline changes from hypothesis-oriented to engineering-oriented?

Building on work by Nature (AI and science: what 1,600 researchers think), we consulted researchers and innovators across BPA's #genomics, #proteomics, #metabolomics and #syntheticbiology communities, ranging from professorial academics, early to mid-career academics and professional facility staff to career bioinformaticians, ML & data engineers and computing engineers. The answer was a unanimous yes! A few people think generative AI is a fad, but everyone agrees that the nature of the discipline has changed. The community appreciates machine learning but seeks leadership and a connection to how the generative AI era will disrupt the omics domain. We found:

- Approximately 40% stated that they do not have the skills, data, or computing capability to attempt generative AI, and first seek the skills,
- Another ~44% stated that they have the skills but not the data (which raises the question of whether they can genuinely leverage generative AI), and
- Only 16% have sufficient skills and data, where access to computing and tools was the limiter on success.

How does the generative AI era affect BPA as the national funder of instrument, digital and data infrastructure for the biomolecular sciences? How does this affect the business model of #researchinfrastructure and the research itself? What are the emerging early wins other than AlphaFold? How does Australian omics research and industry remain at the forefront of a decade of generative AI disruption to fundamental science and scaled-out translation? Work is needed to accelerate stakeholder awareness and adoption of generative AI. Stay posted to find out how we're doing it. If you want to contribute your view or sentiment, you can complete this one-minute survey.
- Gotta love a good (data) scarcity
We often get asked to consider, advise, and strategise on how to invest in AI. Increasingly, we're hearing the AI bubble will burst, but for those who have lived through the last three decades of Australian housing prices, maybe it won't! Perhaps #data #scarcity will drive long-term growth. #AI democratises the more advanced things computers do (automation). For example, we speak to Large Language Models (#LLMs) in English, stuff happens, and we get an answer back in English. To do that, an LLM must appear to understand English and common facts (strictly speaking, it doesn't). We never taught it English grammar, but one can imagine that if you read 300 billion words, even with the most straightforward strategies for analysing all those passages, you would notice a pattern: English grammar. The emerging business models (and research methods) that exploit this are ingenious and profound! We won't go through examples here, but they drive massive investments in computing facilities to train and apply AI. Is that the bubble that bursts? Perhaps not. Three hundred billion words is a lot. Is the Internet an endless supply of words? What happens if we run out? If and when data becomes scarce, we expect the investment flows to adjust: your data could become the most valuable part of the ecosystem. Training AI requires more data than we have. A recently revised Epoch AI study finds LLMs' need for data will exceed the available stock of public human text data between 2026 and 2032 (a toy projection of this crossover follows this piece). That is close! The signals are there: increasingly, we see major AI players signing deals with strategic data partners and publishers. Organisations, innovators, and researchers realise that LLMs affect their long-standing business models, and are changing the licences and access methods for their published data to ensure continued sustainability. Data scarcity will not burst the AI bubble, but it will solidify where the value is for those prepared. How prepared are you for this shift? And how do you make sure you don't miss out?
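To illustrate the shape of that projection, here is a toy sketch of the crossover. Every number in it is an illustrative assumption of ours, not a figure from the Epoch AI study:

```python
# Toy crossover projection in the spirit of the Epoch AI study cited above.
# Every number here is an illustrative assumption, not Epoch AI's figure.

STOCK_TOKENS = 300e12    # assumed stock of public human text (~300T tokens)
demand = 15e12           # assumed data used by the largest 2024 training run
GROWTH_PER_YEAR = 2.0    # assumed yearly growth in data used per run

year = 2024
while demand < STOCK_TOKENS:
    year += 1
    demand *= GROWTH_PER_YEAR
print(f"Under these assumptions, demand crosses the stock around {year}.")
# -> 2029, which happens to sit inside the 2026-2032 window the study reports
```

The point is not the exact year: with compounding demand against a roughly fixed stock, the crossover arrives within a handful of years under almost any plausible growth assumption.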
- AI in the workforce - does it take jobs?
Many organisations come to us asking what AI investment they should make. Some have stated they will not invest as they fear AI taking jobs. When we hear this, we use imagination to reopen the conversation. Our hypothetical starts with, "Imagine it is the 1960s, and everyone in this room is there in the 1960s. We all have our present jobs. And we all have a secretary beside us, punching away at a mechanical keyboard. But more people are employed now than then, and we certainly do not have secretaries". It usually ends with agreement: it is not AI that they fear.

There is evidence to guide us. For example, James Bessen, a successful entrepreneur and later an academic, studies technology's economic impacts on society. In the period leading into 2015, people were concerned that "automation" was taking jobs. He published a paper examining computer automation's impact on 317 occupations from 1980 through 2013 and found: "Employment grows significantly faster in occupations that use computers more." Last year, he followed up on this work, collecting survey data from 917 startups over five years. These startups produce commercial AI products and, through the carefully constructed survey, provide a glimpse into how their products impact labour across industries. Some key findings:

- AI appears to enhance human capabilities rather than replace humans.
- AI appears to shift work from some occupations to others, meaning that some people will lose jobs and new jobs will be created.
- New jobs appear to require new skills, requiring workers to make investments and (perhaps) to endure difficult transitions.
- And many more.

In summary, this line of recent and long-term evidence suggests that AI does not and will not reduce jobs. Instead, AI creates efficiencies and increases quality, producing better products and services and driving demand, thus promoting employment growth. The authors also state: "The evidence tempers concern about mass unemployment or disemployment of professionals." This is just one example, and yes, there are pros and cons to their methods and assumptions. However, there's good evidence for investing in AI.
- Authenticity as a guiding principle
How big is the data centre energy problem? We've been through a decade or two of a virtualisation-led cloud, effectively sharing physical servers among users. Whenever you get a new email or scroll through social media, you ask a server somewhere to do a tiny piece of work. Leveraging the elasticity of the cloud means you rent that microsecond of use rather than buying a physical machine that sits there powered on but idle. You also get that cloud or data centre's commitment to sustainability. Generative AI flips this dynamic on its head. Training large models requires tens to hundreds of GPUs running full tilt for weeks. As we all engage AI, our collective need consumes vast energy, and hence carbon and water, relative to our Web 2.0 / old-school cloud use. Given that data centres (in the pre-generative-AI, cloud-heavy era) accounted for almost a fifth of all electricity used in the Republic of Ireland in 2022, having risen by 400% since 2015 [1], the impact the AI era will have on data centres is a valid concern.

Today, every data centre and cloud has a sustainability, net zero or liquid cooling play. But how do we know which are real? Which has the most significant impact? Which has the greatest promise of attaining sustainability? The issue is that measures of data centre efficiency are globally poor:

- PUE is imprecise, leaving much to interpretation and hence inconsistencies between claims (a small sketch follows this piece), and
- building codes, such as NABERS (an otherwise relevant and excellent code in Australia), use PUE and are yet to catch up with the AI-era change.

Sustainability decision-makers are increasingly conscious of the materiality of claims. Proactive authenticity has an advantage, but sometimes you must pave the way. To this end, we're incredibly proud of our friends at SMC. A year ago, we celebrated the formation of Sustainable Metal Cloud, a partnership informed in part by our work with Firmus. Setting authenticity as a principle, SMC has validated their pioneering technology and subsequent efficiency standard for AI factories. They are the first to publish the full suite of power results and performance data for MLPerf, the de facto standard for benchmarking AI resources, on clusters as large as 64 nodes (512 GPUs). In their news article, they claim: "This showcases significant energy savings over conventional air-cooled infrastructure, which when combined within our Singapore data centre, has proven to save close to 50% total energy." We knew that 50%; the question was how SMC could authentically prove it. Publishing the full power results with their MLPerf benchmark submission is an excellent way! It's so good to see a regional innovation and partnership coming to fruition and leading the global conversation! Well done SMC!

[1] Data centres use almost a fifth of Irish electricity, BBC News (https://www.bbc.com/news/articles/cpe9l5ke5jvo.amp)
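For readers unfamiliar with PUE, here is a minimal sketch of the metric and one reason it is imprecise on its own. The numbers are hypothetical, not SMC's published results:

```python
# Minimal sketch of PUE (Power Usage Effectiveness) and one reason it is
# imprecise. All numbers are hypothetical, not SMC's published results.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """PUE = total facility energy / IT equipment energy (1.0 is the ideal)."""
    return total_facility_kwh / it_equipment_kwh

# Two hypothetical sites reporting an identical PUE of 1.3...
site_a = pue(total_facility_kwh=1300.0, it_equipment_kwh=1000.0)
site_b = pue(total_facility_kwh=650.0, it_equipment_kwh=500.0)
print(f"site A PUE = {site_a:.2f}, site B PUE = {site_b:.2f}")

# ...yet if site B finishes the same training job using half the IT energy,
# its total energy is half site A's. PUE alone cannot distinguish the two,
# which is why publishing full power results alongside MLPerf benchmark
# runs, as SMC has done, makes for a far stronger efficiency claim.
```

Because PUE only compares facility overhead to IT load, it says nothing about how much useful work the IT load performs per joule, which is exactly the gap that benchmark-level power reporting closes.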
- AI applied to research - what 1,600 researchers think
We are working with Bioplatforms Australia (BPA), an Australian enabler of research infrastructure for the molecular sciences, to understand #AI's impact on the #omics community (BPA's focus areas are #genomics, #proteomics, #metabolomics, and #syntheticbiology). We will soon host a series of events and messages to share what we have learned and to crystallise a near-term strategy. Watch this space. In the meantime, one of the core pieces of work was consulting the community, asking: what are the local sentiment and capability to respond to an AI, and increasingly generative AI, ecosystem? Here's a precursor to our findings. Late last year, Nature published an article entitled "AI and science: what 1,600 researchers think". It provides valuable insight from all walks of academia. Some key takeaways mirrored in our findings:

- The share of research papers with titles or abstracts that mention AI or machine-learning terms has risen to around 8%.
- Lack of skills is the dominant barrier to using AI.
- An anecdote from the drug discovery community: "Only a very small number of entities on the planet have the capabilities to train the very large models — which require large numbers of GPUs, the ability to run them for months, and to pay the electricity bill. That constraint is limiting science's ability to make these kinds of discoveries", Garrett Morris, University of Oxford.
- More than half of those surveyed felt it important that researchers using AI collaborate with the commercial firms dominating computing resources and tool development.

Our specialisation is developing a progressive and impactful evidence base, near-term and long-term AI infrastructure, and enablement strategies. We are uniquely qualified to consult deeply technical and academic stakeholders and to facilitate technology partnerships. Dare to Dream!
- Coaching comes in all forms - helping over 1000 computer science students grow
In 2023, we succeeded in developing a sustainability, cloud, and AI line of business. A goal for 2024 is to create a #coaching profile. Yes, we are spreading ourselves thin. However, a key tenet of Innate Innovation is to explore. Besides, we know those who show prowess in technology & science early in their careers often miss the personal leadership training "corporate types" get throughout theirs. There's an opportunity to help the real innovators shine! We've participated in #technology, #career, #personalleadership, #marketinsight, and #corporategovernance activities with people across the pay grades. We've held workshops, educating and connecting many. It's hard to talk about the individuals we've helped. One thing we can speak about (and something Steve has wanted to do for some time) is going back to his undergraduate alma mater (RMIT) to contribute small amounts of tutoring. He effectively coaches first-years on what the innovating world needs, wants, and values, while assisting with understanding of course material and even some marking!
- Are privacy protections applied to technology platforms enough to enable AI?
Some recent work and articles have us thinking... Are privacy protections applied to technology platforms enough to enable AI (and the growth of industries from data) without overly weakening the liberties of individuals? Where do we see strong data liberties leading to greater AI potential? The linked article is interesting, as it calls out some shortfalls if we rely on privacy alone. For example: "It transfers the responsibilities for the harms of the data ecosystem to the individuals without giving them the information or tools to enact change." That is, individuals are empowered to control who can use their data at the point of providing it; we influence how it is shared. However, we are not afforded the same opportunity during data use, and the potential for use is endless. Moreover, platform business models focus on driving more data input through personalisation and attention-grabbing, providing more data for more undetermined future use. A great business model! Except for the degree to which harm becomes readily accessible and prevalent. There may also be better ways to attain the scale of data needed to draw value from AI. The article's solution is to establish data cooperatives: entities that hold data on individuals' behalf and, as an extensive collection of users, can counter the weight of the platforms. We're not suggesting this is commercially wise for an individual platform, or even a pressing societal priority. Instead, all types of organisations face this dynamic. We're asking: "If one begins investing in a strategic AI future, are there other models worth considering?" It is helpful to point out that data collectives are pervasive in the research sector. We've been through a decade or so of building such collectives, where initially we did not know how the data would be used, nor the ROI of the effort. Data collectives, repositories, and registries are emerging as the primary prerequisite by which the research sector applies AI to itself. The resultant datasets are far larger than the hoarded dataset of one researcher, one group, and sometimes one discipline. Hence, the ability to coordinate large datasets is increasingly the rate-limiter to discoveries. The lubricant enabling trust and buy-in for large datasets is belief in the governance of the dataset or collective.
- We now know - digitisation boosts the demand for physical books
Occasionally, a story comes one's way debunking a commonly held sentiment... Most people presume that digital media, in this example the Google Books project, will cause the end of physical books: the dematerialisation of literature. Amazingly, a recently released study analysing the sales of 37,743 books that Google digitised between 2005 and 2009 found the project increased sales of "paper" books by up to 8%! Around 40% of digitised titles saw their sales increase between 2003-2004 and 2010-2011, whereas less than 20% of non-digitised titles had increased sales. The idea is that digitisation enables marketing and exposure at a scale inaccessible to the brick-and-mortar paradigm. Let's face it: it has taken 15-20 years to establish the evidence to debunk the sentiment. Today, businesses face many such concerns about the digital world. For example, will AI take my job? Or does my AI consume more carbon than it saves? We are constantly facing asymmetrical risk-management decisions, where we know the penalty for a cyber breach today, but we do not know the future value of different options in controls. Hence, this article is a timely reminder: because public sentiment on the impact of digital leans one way, there is a real chance future evidence shows the opposite!

An easy-to-read article about it: https://studyfinds.org/books-digitizing-literature-paper/
Paper: Abhishek Nagaraj & Imke Reimers, American Economic Journal (https://www.aeaweb.org/articles?id=10.1257/pol.20210702)