    Why Science Must Embrace Co-Creation with Generative AI to Break Current Research Barriers

By ProfitlyAI | August 25, 2025


This article draws on the dual perspective of a former pharmaceutical chemistry researcher and current AI developer to explore what generative AI (GenAI) may bring to scientific research. While large language models (LLMs) are already transforming how developers write code and solve problems, their use in science remains limited to basic tasks. Yet LLMs can also contribute to the reasoning process. Based on recent research and practical examples, the article argues that GenAI can act as a thinking partner in science and enable true human-AI co-creation, helping researchers make better strategic decisions, explore new perspectives, gain instant expert insights, and ultimately accelerate discovery.

GenAI in science: The untapped opportunity

Over the past twenty years, I have lived through two very different careers. I began in academic research, starting with a PhD in organic chemistry, then gradually moving into pharmaceutical sciences, exploring elements of biology and even physics. Like many scientists, I followed research projects wherever they led, often across disciplines. Today, I work as an AI solution architect, helping companies integrate AI into their workflows. This dual experience gives me a unique perspective on what GenAI could bring to science.

At first, science and coding may seem very different. But they are not. Whether synthesizing a molecule or building a program, both are about assembling pieces into something coherent and functional. In both fields, it often feels like playing with Lego bricks.

What surprises me most today is how little GenAI is used in science. In just a few years and a handful of model generations, language models have already transformed tremendously how we write, program, and design. Developers widely use AI-assisted tools like GitHub Copilot or Cursor, usually to generate small pieces of code, carefully reviewed and integrated into larger projects. This has already boosted productivity several-fold. Some of us go further, using GenAI as a creative partner: not only for speed, but to organize ideas, test assumptions, explore new directions, and design better solutions.

In contrast, science still treats language models as side tools, mostly for summarizing papers or automating routine writing. Data interpretation and reasoning remain largely untouched, as if the intellectual core of science had to stay exclusively human. To me, this hesitation is a missed opportunity.

When I share this with former colleagues, the same objection always comes: language models sometimes hallucinate or give wrong answers. Yes, they do. So do humans, whoever they are. We expect language models to be flawless, while we don't expect that from our human peers. A scientist can be wrong. Top-tier studies are even sometimes retracted. That is why critical thinking exists, and it should apply to AI as well. We should treat LLMs the same way we treat people: listen with curiosity and caution, verify and refine.

Recent breakthroughs in AI "reasoning" show what is possible. Microsoft's MAI-DxO system (Nori H. et al., 2025) shows that a team of LLM-powered AI agents, working in a structured way, can outperform traditional medical diagnostics. Outperform. That does not mean doctors are obsolete, but it does show the immense value of AI feedback in decision-making.

This is already happening, and it changes how we might structure reasoning itself. It is time to stop treating AI as a fancy calculator and start seeing it as what it could be: a real thinking partner.

Why scientific research is under pressure

Scientific research today faces enormous challenges. Laboratories around the world generate huge amounts of data, often faster than scientists can process it, even within their specific field. On top of that, researchers juggle many duties: teaching, supervising students, applying for funding, attending conferences, and trying to keep up with the literature. This overload makes it increasingly hard to find time and focus.

Science is also becoming more interdisciplinary every day. Fields like pharmaceutical research, which I know well, constantly draw from chemistry, biology, physics, computation, and more. In my last academic project at ETH Zurich as a senior scientist, I worked with a biologist colleague on the identification of inhibitors of an RNA-protein interaction using a high-throughput FRET assay (Roos, M.; Pradère, U. et al., 2016). It involved concepts from physics (light absorption), organic chemistry (molecule synthesis), biochemistry (testing), and the use of complex robotic systems. Even with our complementary profiles, the task was overwhelming. Every scientific discipline has its own set of tools, language, and methods, and acquiring this knowledge requires significant cognitive effort.

Research is also expensive and time-consuming. Experiments demand substantial human and financial resources. Global competition, shrinking budgets, and a slow and extensive peer-review process add more pressure. Labs must innovate faster, publish more, and secure funding constantly. Under these conditions, maintaining both the quality and originality of research is not just difficult, it is becoming almost heroic.

Any tool that can help scientists manage data, think more clearly, access expertise across disciplines, explore ideas more freely, or simply save time and mental energy deserves serious consideration. GenAI has the potential to be exactly that kind of tool, one that can help scientists push the frontiers of science even further.

    What co-creation with GenAI can really change

Embracing GenAI as a co-creator in scientific research has the potential to change not just how we write, search, or analyze, but how we think. It is not about automation, it is about expanding the cognitive space available to scientists. One of the most powerful benefits of working with language models is their ability to multiply perspectives instantly. A well-prompted model can bring together ideas from physics, biology, and chemistry in a single response, making connections that a scientist specialized in a single discipline might overlook.

Recent studies show that, when properly guided, general-purpose language models can match or outperform domain-specific systems on certain tasks. They demonstrate an ability to integrate cross-disciplinary knowledge and support specialized reasoning, as shown in recent work on tagging, classification, and hypothesis generation (LLM4SR, Luo et al., 2025).

Co-creation also enables faster and clearer hypothesis generation. Instead of analyzing a dataset in the usual way, a scientist can brainstorm with an LLM, explore "what if" questions, and test alternative directions in seconds. Controlled experiments show that large language models can generate research ideas of comparable quality to those of human researchers. In many cases, experts even judged them more novel or promising (Si et al., 2024; Kumar et al., 2024). While these studies focus on idea generation, they suggest a broader potential: that generative models could meaningfully assist in early-stage scientific reasoning, from designing experiments to exploring alternative interpretations or formulating hypotheses.

Language models also bring a form of objective criticism. They have no ego (at least for now), no institutional or disciplinary bias, and do not stubbornly defend their own ideas. This makes them ideal for playing the role of a neutral devil's advocate or providing fresh perspectives that challenge our assumptions.

Equally important, GenAI can act as a structured memory of reasoning. Every hypothesis, fork of thought, or discarded path can be documented, revisited, and reused. Instead of relying on the incomplete notes of a postdoc who left years ago, the lab gains a durable, collective memory of its intellectual process.

Used this way, GenAI becomes far more than a scientific tool. It becomes a catalyst for broader, deeper, and more rigorous thinking.

    Co-creating with GenAI: Collaboration, not delegation

Using GenAI in scientific research does not mean handing over control. It means working with a system that can help us think better, but not think for us. Like any lab partner, GenAI brings strengths and limitations, and the human remains responsible for setting the goals, validating the direction, and interpreting the results.

Some worry that language models might lead us to rely too much on automation. But in practice, they are most effective when they are guided, prompted, and challenged. As Shojaee et al. (2025, Apple) show in The Illusion of Thinking, recently under scrutiny (Lawsen A., 2025, Anthropic), LLMs may seem to reason well, but their performance drops significantly when tasks become too complex. Without the right framing and decomposition, the output can easily be shallow or inconsistent.

This means two things:

1. Generic LLMs cannot be used directly for complex scientific tasks. They require specific orchestration, similar to frameworks like Microsoft's MAI-DxO (Nori H. et al., 2025), combined with state-of-the-art techniques such as retrieval-augmented generation (RAG); a minimal sketch of the RAG pattern follows this list.

2. Co-creation is an active process. The scientist must remain the conductor, deciding what makes sense, what to test, and what to discard, because the LLM can simply be wrong.
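To make the RAG idea from point 1 concrete, here is a minimal, generic sketch of the pattern: retrieve the most relevant lab documents, then ground the model's answer in them. This is not taken from MAI-DxO or any cited framework; the embedding function, the document store, and the prompt wording are illustrative placeholders.

```python
# Minimal RAG sketch: retrieve relevant documents, then build a grounded prompt.
# All components (embed, corpus, prompt text) are illustrative assumptions.
from dataclasses import dataclass

import numpy as np


@dataclass
class Document:
    text: str
    embedding: np.ndarray


def embed(text: str) -> np.ndarray:
    """Placeholder embedding; a real setup would use a sentence encoder."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.normal(size=384)


def retrieve(query: str, corpus: list[Document], k: int = 3) -> list[Document]:
    """Return the k documents most similar to the query (cosine similarity)."""
    q = embed(query)
    scores = [
        float(np.dot(q, d.embedding) / (np.linalg.norm(q) * np.linalg.norm(d.embedding)))
        for d in corpus
    ]
    ranked = sorted(zip(scores, corpus), key=lambda pair: pair[0], reverse=True)
    return [doc for _, doc in ranked[:k]]


def build_prompt(query: str, retrieved: list[Document]) -> str:
    """Ground the LLM's answer in retrieved lab notes or literature excerpts."""
    context = "\n\n".join(f"[{i + 1}] {d.text}" for i, d in enumerate(retrieved))
    return (
        "You are assisting a research team. Answer using ONLY the context below, "
        "and cite the numbered sources you rely on.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )


# Usage: in practice the corpus comes from lab notebooks, protocols, and papers.
corpus = [Document(t, embed(t)) for t in ["FRET assay protocol ...", "Lin28 inhibitor screen ..."]]
prompt = build_prompt("Which buffer conditions degraded the probe?", retrieve("probe degradation", corpus))
```

In a real setup the prompt would then be sent to whichever LLM the lab trusts, and the scientist would check the cited passages before accepting the answer.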

In a way, this mirrors how scientists already interact. We read publications, ask colleagues for feedback, consult experts in other domains, and revise our ideas along the way. Co-creating with GenAI follows the same logic, except that this "colleague" is always available, highly knowledgeable, and quick to offer suggestions.

From my experience as a developer, I often treat the language model as a creative partner. It does not just generate code, it sometimes shifts my perspective and helps me avoid mistakes, accelerating my workflow. Once, when I struggled to transfer an audio file across web pages, the LLM suggested an entirely different approach: using IndexedDB. I had not thought of it, yet it solved my problem more efficiently than my plan. Not only did "we" fix the issue, I also learned something new. And this happens every day, making me a better developer.

In science, I can imagine the same kind of support. Designing a multi-step organic synthesis, the model might suggest:

"Why not create the alkene with a cross-metathesis approach instead of a Wittig-Horner? The strong basic conditions of the latter might degrade your intermediate product."

Not to mention the potential support for younger researchers:

"Pre-treat your silica gel with triethylamine before purification. Your hydroxyl protecting group is known to degrade under slightly acidic conditions."

Such timely and precise advice could save weeks of work. But the real value comes when one engages with the model critically: testing, questioning, asking it to challenge our own reasoning. Used this way, GenAI becomes a mirror for our ideas and a challenger to our assumptions. It does not replace scientific reasoning, it sharpens it, making it more deliberate, better documented, and more creative.

Enhancing the scientific method without changing its principles

GenAI does not replace the scientific method. It respects its core pillars: hypothesis, experimentation, interpretation, peer review. It has the potential to strengthen and accelerate every one of them. It is not about changing how science works, but about supporting the process at every step:

• Hypothesis generation: GenAI can help generate a broader range of hypotheses by offering new or transversal points of view. It can challenge assumptions and propose alternative angles.
• Experiment selection: It can help prioritize experiments based on feasibility, cost, potential impact, or novelty. Recent frameworks such as the AI Scientist (Lu et al., 2024) or ResearchAgent (Baek et al., 2025) already explore this by mapping possible experimental paths and ranking them by scientific value.
• Data analysis and interpretation: GenAI can detect patterns or anomalies, or suggest interpretations that may not be obvious to researchers. It can also connect findings with related studies, link to a colleague's lab data, or highlight possible confounding variables.
• Peer-review simulation: It can reproduce the kind of dialogue that happens in lab meetings or in the peer-review publication process. By simulating expert personas (chemist, biologist, data scientist, toxicologist, etc.), it can provide structured feedback from different perspectives early in the research cycle (a minimal sketch follows this list).
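As a rough illustration of the peer-review simulation mentioned in the last point, the following sketch asks several simulated experts to critique the same hypothesis. The persona list, the prompts, and the chat() helper are assumptions made for the example, not an existing tool or API.

```python
# Hedged sketch of multi-persona "peer-review simulation".
# chat() is a placeholder for any chat-completion client; personas are illustrative.

PERSONAS = {
    "medicinal chemist": "Focus on synthetic feasibility and compound stability.",
    "biologist": "Focus on mechanism, assay design, and biological plausibility.",
    "data scientist": "Focus on statistics, sample size, and confounding variables.",
    "toxicologist": "Focus on safety liabilities and off-target risks.",
}


def chat(system_prompt: str, user_prompt: str) -> str:
    """Placeholder LLM call; replace with the client your lab has approved."""
    return f"[model reply for persona: {system_prompt[:40]}...]"


def simulate_review(hypothesis: str) -> dict[str, str]:
    """Collect structured feedback on one hypothesis from each simulated expert."""
    reviews = {}
    for persona, focus in PERSONAS.items():
        system = f"You are a critical {persona} reviewing a research proposal. {focus}"
        user = (
            f"Hypothesis under review:\n{hypothesis}\n\n"
            "List the two strongest objections and one concrete experiment that would address them."
        )
        reviews[persona] = chat(system, user)
    return reviews


reviews = simulate_review("Compound X inhibits the target RNA-protein interaction at low micromolar concentration.")
# The researcher reads the reviews, keeps what is scientifically sound, and discards the rest.
```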

The scientific method is not fixed, it evolves with new tools. Just as computers, statistics, and molecular modeling transformed scientific practice, GenAI can add new structure, speed, and diversity of thought. Most importantly, it can preserve a full memory of reasoning. Every hypothesis, decision, or branch of exploration can be documented and revisited. This level of transparent and auditable thinking is exactly what modern science needs to strengthen reproducibility and trust.
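One simple way to picture this "memory of reasoning" is an append-only log where every prompt, model answer, and human decision is recorded. The file name and record fields below are assumptions for illustration, not a standard.

```python
# Minimal sketch of an auditable reasoning log (illustrative fields and file name).
import json
import time
from pathlib import Path

LOG = Path("reasoning_log.jsonl")


def record_step(hypothesis: str, model_answer: str, decision: str, author: str) -> None:
    """Append one reasoning step so it can be revisited, audited, or reused later."""
    entry = {
        "timestamp": time.time(),
        "hypothesis": hypothesis,
        "model_answer": model_answer,
        "human_decision": decision,  # e.g. "accepted", "rejected: wrong mechanism"
        "author": author,
    }
    with LOG.open("a", encoding="utf-8") as fh:
        fh.write(json.dumps(entry) + "\n")


record_step(
    hypothesis="Cross-metathesis avoids the basic conditions of a Wittig-Horner step.",
    model_answer="Suggested an alternative route; flagged possible isomerization.",
    decision="accepted for next synthesis attempt",
    author="researcher-01",
)
```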

Early Proof-of-Concept Results of GenAI in Science

The potential of GenAI in science is not just a theory, it has already been demonstrated. Several studies by teams from leading research groups over the past two years have shown how GenAI can generate and refine robust scientific hypotheses and assist with experiment planning and data interpretation.

SciMON (Wang et al., 2024) evaluated the ability of language models to propose new research ideas using retrieval from the scientific literature and an LLM. The system starts from a scientific context, retrieves relevant knowledge, and generates ideas. The ideas are then filtered for novelty and plausibility, eventually enriched and evaluated by human experts. Many were judged original and promising.

Similarly, SciMuse (Gu & Krenn, 2024) used a large knowledge graph (58 million publications) and an LLM (GPT-4) to suggest new research ideas. After analysing a scientist's publications, the framework selected related concepts and proposed possible projects and collaborations. In total, 4,400 SciMuse-generated ideas were evaluated by over 100 research group leaders. About a quarter were rated as very interesting, showing that GenAI can help identify unexpected and promising research directions.

The more recent ResearchAgent framework (Baek et al., 2025) goes beyond single-step idea generation. It uses several LLM-powered reviewing agents to iteratively refine ideas. An initial set of hypotheses is generated from the literature, then improved through several cycles of feedback and modification. Expert evaluations showed that many of the final ideas were not only relevant but genuinely novel, reinforcing the results of the earlier frameworks.

An even more ambitious project is the AI Scientist (Lu et al., 2024; Yamada, Y. et al., 2025), a framework aiming to fully automate the research process. In its first version, hypotheses were generated from the scientific literature and tested through simulated environments or computational benchmarks. The second version extended this by incorporating feedback from automated laboratory experiments, showing a path toward fully autonomous scientific discovery. This system was not only a proof of concept: it actually produced three scientific publications accepted at peer-reviewed conferences. One of them was even selected for a major workshop in synthetic biology. Of course, such autonomy is only possible in domains where experiments can be carried out entirely in silico. And beyond feasibility, it is not obvious that full automation is even desirable, since it raises important ethical and epistemological questions about the role of human judgment in science.

Together, these works demonstrate that, with proper organisation, GenAI can support scientific creativity and reasoning in a structured and credible way. This goes far beyond data exploitation and points to a coming paradigm shift in how science will be conducted in the coming years.

The risks of ignoring this transition

Scientific knowledge is growing faster than ever and becoming increasingly interconnected across disciplines. It is harder than ever to stay current with new discoveries and, above all, to remain original in one's research. Choosing to dismiss AI assistance may widen the gap between what a researcher knows and what has been discovered, reducing their ability to fully process and interpret their own data efficiently and to innovate. Institutions that delay adoption risk falling behind not only in productivity, but also in creativity and impact.

Software development already shows how GenAI integration in the workflow leads to huge efficiency gains. Those who resist using AI tools are now significantly less productive than those who embrace them. They also struggle to keep up with an evolving toolchain. Science has no reason to be different.

Waiting for flawless AI tools for science is neither realistic nor necessary. Scientific practice has always advanced through imperfect but promising technologies. What matters is adopting innovation critically and responsibly, enhancing productivity and creativity without compromising scientific rigor. Developers face this challenge every day: producing code faster with GenAI while maintaining standards of quality and robustness.

The real risk is not using GenAI too early, but being left behind when others have already built robust workflows and expertise with it. Competing with research teams that use GenAI effectively will become increasingly difficult, and soon impossible without comparable tools.

It is time for science to partner with GenAI

The question is no longer whether GenAI will change science, but when researchers will be ready to evolve and work with it. The sooner we begin treating GenAI as a reasoning partner, the sooner we can unlock its full potential and push the frontiers of discovery further, at a time when the pace of progress is slowing down.

Recent studies have highlighted a growing concern across scientific disciplines: the diminishing disruptiveness of new discoveries. Bloom et al. (2020) showed that despite increasing investments in R&D, scientific productivity is declining. Park et al. (2023) further demonstrated that scientific publications and patents have become less disruptive over time, pointing to a systemic "slowdown" in the generation of novel ideas.

These findings underline the need to rethink how research is conducted. GenAI assistance could offer the powerful lever research needs to counter this decline, helping explore broader spaces of ideas, bridge disciplines, and push thinking beyond its current limits. It is not meant to be a shortcut, it is a catalyst. By embracing GenAI as a co-creator, we may be able to reverse the negative trend and enter a new phase of exploration, one marked not by diminishing returns, but by renewed ambition and breakthrough thinking.

I do not say this as an outsider. As a developer, I have experienced this shift firsthand. At first, GenAI integration in software development tools was far from perfect (and it still is not). But over time it has become a true cognitive copilot, something I rely on every day, not just to write code, but to solve problems, test ideas, and accelerate my reasoning. I am convinced the same transformation can happen in science. Not at the experimental level, since most scientists' experiments are not in silico, but as a reasoning amplifier, embedded in the early stages of the scientific process. It will not be fast, and it will not be without friction. But it will happen, because the potential is too great to ignore.

One common fear is that GenAI might make us intellectually lazy. But this concern is not new. When calculators became widespread, many worried we would lose our ability to do mental arithmetic. And in a way, they were right: we do fewer calculations "by hand". But we also gained something far more valuable: the ability to model complexity and expand the reach of applied mathematics. The same is true for GenAI. It may reshape the way we think, but it also frees us to focus on higher-order reasoning, not by replacing our intelligence, but by extending what we can do with it.

Now is the time for scientists, institutions, and funders to embrace this new partner, and imagine what becomes possible when we reason with GenAI, not after it.

The Architecture of a New Research Paradigm

GenAI is not a plug-and-play solution. Unlocking its potential for scientific research requires more than a powerful LLM and clever prompts. It demands deliberate system design aligned with scientific goals, methodical strategies to guide the reasoning, and deep integration into the actual research workflow itself.

Here are three directions that, in my view, hold the greatest promise:

• Build scientific copilots that understand data in context. AI tools must go beyond answering questions; they should actively assist in exploring experimental results. By integrating lab-specific data, historical context, and domain knowledge, they could highlight subtle patterns, detect inconsistencies, or suggest the next best experiment. The goal is not automation, but deeper insight, achievable through a combination of fine-tuned models and advanced retrieval-augmented generation (RAG) strategies.
• Create multi-agent frameworks to simulate expert dialogue. Scientific progress is not the product of a single viewpoint. It emerges from comparing, combining, and challenging different disciplinary perspectives. GenAI now makes it possible to simulate this by creating multiple expert agents, each reasoning from a different scientific lens and challenging one another's assumptions. This offers a new form of interdisciplinary exchange, continuous, reproducible, and unconstrained by time, availability, or disciplinary boundaries. Microsoft's MAI-DxO (Nori H. et al., 2025) is a proof of concept.
• Use GenAI to explore, not just to automate or optimize. Most current AI tools aim to streamline tasks. Science, however, also needs tools that diverge, that help generate unexpected ideas, ask out-of-the-box questions, and challenge foundational assumptions. Prompting strategies like tree-of-thought or role-based agent "reasoning" can push scientists in directions they would not usually reach on their own (a minimal sketch of such an exploration loop follows this list).
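To illustrate the third direction, here is a rough tree-of-thought-style exploration loop: generate several candidate directions, score them, and expand only the most promising branches. The generate() and score() functions stand in for LLM calls, and the branching factor, depth, and pruning rule are arbitrary choices for the example, not a published recipe.

```python
# Illustrative tree-of-thought-style exploration of research directions.
# generate() and score() are placeholders for LLM calls; parameters are assumptions.
from dataclasses import dataclass, field


@dataclass
class Thought:
    text: str
    score: float = 0.0
    children: list["Thought"] = field(default_factory=list)


def generate(parent: str, n: int = 3) -> list[str]:
    """Placeholder: ask an LLM for n distinct follow-up directions to `parent`."""
    return [f"{parent} -> variant {i + 1}" for i in range(n)]


def score(idea: str) -> float:
    """Placeholder: ask an LLM (or a rubric) to rate novelty and feasibility, 0-1."""
    return min(1.0, len(idea) / 100)


def explore(root_question: str, depth: int = 2, branch: int = 3, keep: int = 2) -> Thought:
    """Expand the most promising branches at each level, pruning the rest."""
    root = Thought(root_question)
    frontier = [root]
    for _ in range(depth):
        next_frontier = []
        for node in frontier:
            candidates = [Thought(t, score(t)) for t in generate(node.text, branch)]
            node.children = sorted(candidates, key=lambda c: c.score, reverse=True)[:keep]
            next_frontier.extend(node.children)
        frontier = next_frontier
    return root


tree = explore("How could we stabilize the hydroxyl protecting group during purification?")
# The scientist inspects the surviving branches and decides which ones deserve bench time.
```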

Together, these approaches suggest a future where GenAI does not replace scientific reasoning, but expands it. This is no longer speculative, it is already being explored. What is missing now are the infrastructure, the openness, and the mindset to bring them into the core of scientific practice.

Ethics, trust, and the need for open infrastructure

As GenAI becomes more deeply embedded in scientific work, another "dimension" needs to be addressed, one that touches on trust, openness, confidentiality, and responsibility. Generic LLMs are largely "black boxes" trained on worldwide data, without the ability to distinguish between reliable and unreliable information. They can contain biases, produce confident hallucinations, and obscure the reasoning behind their outputs. Confidentiality also becomes a major concern when working with sensitive research data.

These risks are not specific to science, but they are amplified by the complexity and rigour that scientific research requires. In science, reproducibility, auditability, and methodological rigour are non-negotiable, which means that AI systems must be inspectable, adaptable, and improvable. This makes open infrastructure not just a preference but a scientific necessity.

Ideally, every laboratory would run its own GenAI infrastructure. In practice, however, only a few will ever have the resources (time, expertise, and money) to handle that level of complexity. A trusted intermediary, whether a public institution or a mission-driven company, will need to shoulder this responsibility, guided by clear and transparent requirements. Models used in research should be open-source, or at least fully auditable; prompts and intermediate outputs traceable; training datasets documented and, whenever feasible, shared. Ultimately, robust infrastructure will be needed at the national or institutional level to ensure data ownership and governance remain under control.

Without this level of "openness", the risk is creating a new kind of scientific "black box", one that may accelerate results in the short term but also prevent full understanding. In science, where understanding is so essential, this would be highly detrimental. I have witnessed this in software development: if AI-generated code is not fully audited by the team, technical debt accumulates until it becomes extremely difficult to implement new features and maintain the codebase. We cannot afford to let science fall into a similar trap.

Trust in GenAI will not come from blind adoption, but from responsible integration. AI tools must be shaped to fit scientific standards, not the other way around. And while some training can help, scientists, naturally curious and adaptive, will likely learn to use these tools as they once did with the internet, software, and so on.

In the long run, the credibility of GenAI-augmented science will depend on this transparency. Openness is not just a technical requirement, it is a matter of scientific ethics.

Conclusion: Science cannot afford to wait, but it must use GenAI responsibly

Scientists should not see GenAI as a threat to their role, nor as a source of absolute truth. It should be considered a collaborator: one that works tirelessly and provides instant access to broad expertise and constructive criticism.

The scientific method does not need to be replaced, but it can be extended, enriched, and accelerated by GenAI. Not just by tools that automate tasks, but by systems that help us ask sharper questions, consider alternative perspectives, test bolder hypotheses, and engage more deeply with complexity.

Yes, the risks of misinformation are real, but errors and superficial outputs are inherent to both human and artificial collaborators. As with every method in science, what matters most is applying it with rigor and critical thinking.

Yes, the risks of becoming more passive in our process of creation and reasoning are real too. Equally, it is our responsibility to use GenAI wisely, as an amplifier of our thoughts and not a replacement.

In my experience as a developer, co-creating with GenAI has transformed how problems are approached and solved. With a scientific background, I am convinced that research can follow a similar trajectory. Different, because most scientific experiments remain in human hands, whereas code can already be written by AI, but perhaps even more profound in its approach, given the exploratory nature and inherent complexity of science.

Trust in GenAI-augmented science depends on transparency, accountability, and reliability. It requires open and auditable AI systems trained on quality data, with purpose-designed architecture and supported by infrastructures that respect data sovereignty. Scientists must retain intellectual authority, remaining the guiding force behind results interpretation and reasoning.

What we need now is concrete implementation, with purpose-built tools for science, as well as researchers willing to explore, test, and shape new ways of thinking. Institutions must support this transition, not with rigid expectations, but with room for iteration.

Ultimately, the next frontier of science is not only knowledge itself, but the new methods we create to generate it.

    References

Baek, J., Jauhar, S. K., Cucerzan, S., & Hwang, S. J. (2025). ResearchAgent: Iterative Research Idea Generation over Scientific Literature with Large Language Models. arXiv preprint arXiv:2404.07738. https://arxiv.org/abs/2404.07738

Bloom, N., Jones, C. I., Van Reenen, J., & Webb, M. (2020). Are Ideas Getting Harder to Find? American Economic Review, 110(4), 1104–1144. https://doi.org/10.1257/aer.20180338

Gu, X., & Krenn, M. (2024). Interesting Scientific Idea Generation using Knowledge Graphs and LLMs: Evaluations with 100 Research Group Leaders. arXiv preprint arXiv:2405.17044. https://doi.org/10.48550/arXiv.2405.17044

Kumar, S., Ghosal, T., Goyal, V., & Ekbal, A. (2024). Can Large Language Models Unlock Novel Scientific Research Ideas? arXiv preprint arXiv:2409.06185. https://arxiv.org/abs/2409.06185

Lawsen, A. (2025). Comment on The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity. arXiv preprint arXiv:2506.09250. https://doi.org/10.48550/arXiv.2506.09250

Lu, C., Lu, C., Lange, R. T., Foerster, J., Clune, J., & Ha, D. (2024). The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery. arXiv preprint arXiv:2408.06292. https://doi.org/10.48550/arXiv.2408.06292

Luo, Z., Yang, Z., Xu, Z., Yang, W., & Du, X. (2025). LLM4SR: A Survey on Large Language Models for Scientific Research. arXiv preprint arXiv:2501.04306. https://doi.org/10.48550/arXiv.2501.04306

Nori, H., Daswani, M., Kelly, C., Lundberg, S., Ribeiro, M. T., Wilson, M., Liu, X., Sounderajah, V., Carlson, J., Lungren, M. P., Gross, B., Hames, P., Suleyman, M., King, D., & Horvitz, E. (2025). Sequential Diagnosis with Language Models. arXiv preprint arXiv:2506.22405. https://doi.org/10.48550/arXiv.2506.22405

Park, M., Leahey, E., & Funk, R. J. (2023). Papers and patents are becoming less disruptive over time. Nature, 613, 138–144. https://doi.org/10.1038/s41586-022-05543-x

Roos, M., Pradère, U., Ngondo, R. P., Behera, A., Allegrini, S., Civenni, G., Zagalak, J. A., Marchand, J.-R., Menzi, M., Towbin, H., Scheuermann, J., Neri, D., Caflisch, A., Catapano, C. V., Ciaudo, C., & Hall, J. (2016). A small-molecule inhibitor of Lin28. ACS Chemical Biology, 11(10), 2773–2781. https://doi.org/10.1021/acschembio.6b00232

Si, C., Yang, D., & Hashimoto, T. (2024). Can LLMs Generate Novel Research Ideas? A Large-Scale Human Study with 100+ NLP Researchers. arXiv preprint arXiv:2409.04109. https://arxiv.org/abs/2409.04109

Shojaee, P., Mirzadeh, I., Alizadeh, K., Horton, M., Bengio, S., & Farajtabar, M. (2025). The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity. arXiv preprint arXiv:2506.06941. https://doi.org/10.48550/arXiv.2506.06941

    Wang, Q., Downey, D., Ji, H., & Hope, T. (2024). SciMON: Scientific Inspiration Machines Optimized for Novelty. arXiv preprint arXiv:2305.14259. https://doi.org/10.48550/arXiv.2305.14259

Yamada, Y., Lange, R. T., Lu, C., Hu, S., Lu, C., Foerster, J., Clune, J., & Ha, D. (2025). The AI Scientist-v2: Workshop-Level Automated Scientific Discovery via Agentic Tree Search. arXiv preprint arXiv:2504.08066. https://doi.org/10.48550/arXiv.2504.08066


