In this paper we introduce GeneGPT, a novel method that teaches LLMs to use NCBI's Web APIs to answer genomics questions. Specifically, we prompt Codex to solve the GeneTuring tests via NCBI Web APIs using in-context learning and an augmented decoding algorithm that can detect and execute API calls. On the GeneTuring benchmark, GeneGPT achieves strong performance on eight tasks, with an average score of 0.83, substantially outperforming retrieval-augmented LLMs such as Bing (0.44), the biomedical LLMs BioMedLM (0.08) and BioGPT (0.04), and general-purpose models such as GPT-3 (0.16) and ChatGPT (0.12). Further analysis suggests that (1) API demonstrations generalize well across tasks and are more useful than documentation for in-context learning; (2) GeneGPT generalizes to longer chains of API calls and can answer multi-hop questions in the novel GeneHop dataset; and (3) distinct error types dominate in specific tasks, providing valuable guidance for future improvements.
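The augmented decoding loop can be pictured as follows: the model generates text until it emits an API-call marker, the call is executed against NCBI Web APIs, and the result is appended to the context before decoding resumes. The sketch below is a minimal illustration under assumed conventions; the bracketed call delimiters, the `call_llm` helper, and the URL handling are illustrative stand-ins, not GeneGPT's exact implementation.

```python
import urllib.request

def call_llm(prompt: str) -> str:
    """Placeholder for an LLM completion call (hypothetical helper)."""
    raise NotImplementedError

def answer_with_api_calls(question: str, prompt_header: str, max_calls: int = 5) -> str:
    """Sketch of an augmented decoding loop: generate until an API call appears,
    execute it against NCBI E-utils, append the result, and continue decoding."""
    context = prompt_header + question
    for _ in range(max_calls):
        completion = call_llm(context)
        start = completion.find("[https://eutils.ncbi.nlm.nih.gov")
        if start == -1:                              # no API call requested: done
            return completion
        end = completion.find("]", start)
        url = completion[start + 1:end]
        with urllib.request.urlopen(url) as resp:    # execute the Web API call
            result = resp.read().decode("utf-8", errors="ignore")
        # Append the generated text, the call, and its result, then keep decoding.
        context += completion[:end + 1] + "->" + result + "\n"
    return context
```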
The complex interactions and effects of competition are central to understanding species coexistence and biodiversity in ecological systems. Consumer Resource Models (CRMs) have historically been analyzed with geometric arguments to address this question, yielding broadly applicable concepts such as Tilman's $R^*$ and species coexistence cones. We extend these arguments by introducing a geometric framework for species coexistence based on convex polytopes in the space of consumer preferences. We show how the geometry of consumer preferences can be used to predict species coexistence, to enumerate stable ecological steady states, and to describe transitions between them. Taken together, these results provide a new, qualitative understanding of how species traits shape ecosystems within niche theory.
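A coexistence condition of this geometric kind can be phrased as polytope membership. The sketch below, a simplified illustration rather than the paper's exact construction, checks whether a given point lies in the convex polytope spanned by consumer preference vectors via a feasibility linear program.

```python
import numpy as np
from scipy.optimize import linprog

def in_convex_polytope(point: np.ndarray, vertices: np.ndarray) -> bool:
    """Check whether `point` lies in the convex hull of `vertices` (rows) by
    searching for weights w >= 0 with sum(w) = 1 and w @ vertices = point.
    In a polytope picture of CRMs (an assumption of this sketch), membership of
    this kind is the geometric condition signalling a feasible coexistence point."""
    n = vertices.shape[0]
    A_eq = np.vstack([vertices.T, np.ones(n)])   # convex-combination constraints
    b_eq = np.append(point, 1.0)
    res = linprog(c=np.zeros(n), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * n, method="highs")
    return res.success

# Example: three consumer preference vectors in a two-resource space.
prefs = np.array([[0.9, 0.1], [0.5, 0.5], [0.2, 0.8]])
print(in_convex_polytope(np.array([0.5, 0.5]), prefs))    # True: inside
print(in_convex_polytope(np.array([0.95, 0.05]), prefs))  # False: outside
```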
Transcription typically occurs in bursts, with periods of high activity (ON) interleaved with inactive (OFF) phases. How transcriptional bursts are regulated to set the precise timing and spatial pattern of transcriptional activity remains unresolved. In the fly embryo, live transcription imaging lets us examine key developmental genes with single-polymerase precision. Quantifying single-allele transcription rates and multi-polymerase bursts reveals shared bursting behavior across all genes, in time and space, as well as under cis and trans perturbations. The transcription rate is set primarily by the allele's ON-probability, while changes in the transcription initiation rate remain limited. A given ON-probability determines a specific combination of mean ON and OFF times, preserving a characteristic burst timescale. Our analysis reveals a convergence of regulatory processes that chiefly modulate the probability of the ON-state, and thereby mRNA production, rather than tuning the specific ON and OFF durations for each mechanism. Our findings thus motivate and guide further investigation into the mechanisms that implement these bursting rules and govern transcriptional regulation.
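In the standard two-state (telegraph) picture of bursting, assumed here only for illustration (the abstract does not commit to this exact model), these observations can be summarized as

$$ p_{\mathrm{ON}} \;=\; \frac{k_{\mathrm{on}}}{k_{\mathrm{on}} + k_{\mathrm{off}}} \;=\; \frac{\langle \tau_{\mathrm{ON}} \rangle}{\langle \tau_{\mathrm{ON}} \rangle + \langle \tau_{\mathrm{OFF}} \rangle}, \qquad \langle \text{rate} \rangle \;=\; p_{\mathrm{ON}} \, r_{\mathrm{ini}}, $$

with $\langle \tau_{\mathrm{ON}} \rangle = 1/k_{\mathrm{off}}$ and $\langle \tau_{\mathrm{OFF}} \rangle = 1/k_{\mathrm{on}}$. A preserved characteristic burst timescale then means the mean transcription rate is tuned mainly through $p_{\mathrm{ON}}$ at a nearly constant initiation rate $r_{\mathrm{ini}}$.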
In some proton therapy facilities, patient alignment relies on two orthogonal 2D kV images acquired at fixed oblique angles, because no in-room 3D imaging is available. Tumor visibility in the kV images is limited because the patient's 3D anatomy is projected onto a 2D plane, particularly when the tumor is hidden behind high-density structures such as bone. This can lead to large patient setup errors. Reconstructing a 3D CT image from the kV images acquired at the treatment isocenter in the treatment position would provide a solution.
An asymmetric, autoencoder-like network built from vision transformer blocks was developed. Data were collected from a single head-and-neck patient: 2 orthogonal kV images (1024×1024 pixels), one 3D CT scan with padding (512×512×512 voxels) acquired on the in-room CT-on-rails before the kV exposures, and 2 digitally reconstructed radiographs (DRRs) (512×512 pixels) computed from that CT. The kV images were resampled every 8 voxels and the DRR and CT images every 4 voxels, producing a dataset of 262,144 samples in which each image had a dimension of 128 voxels in each spatial direction. Training used kV and DRR images together, requiring the encoder to learn a feature map shared by both modalities; testing used independent kV images only. The sub-volumes produced by the model were arranged according to their spatial positions and concatenated into the full-size synthetic CT (sCT). Image quality of the sCT was evaluated using the mean absolute error (MAE) and the volume histogram of per-voxel absolute CT number differences (CDVH).
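The resampling and reassembly steps can be illustrated with a simple sliding-window scheme. The sketch below is a hedged approximation of the data pipeline described above; the strides, window size, and overlap-averaging strategy are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def extract_patches(volume: np.ndarray, size: int = 128, stride: int = 4):
    """Slide a size^3 window over a padded 3D volume with the given stride,
    yielding (corner, patch) pairs (e.g., stride 4 for CT/DRR, 8 for kV)."""
    zmax, ymax, xmax = (s - size for s in volume.shape)
    for z in range(0, zmax + 1, stride):
        for y in range(0, ymax + 1, stride):
            for x in range(0, xmax + 1, stride):
                yield (z, y, x), volume[z:z + size, y:y + size, x:x + size]

def assemble(patches, full_shape, size: int = 128) -> np.ndarray:
    """Place predicted sub-volumes back at their spatial positions and average
    overlapping regions to form the full-size synthetic CT."""
    acc = np.zeros(full_shape, dtype=np.float32)
    cnt = np.zeros(full_shape, dtype=np.float32)
    for (z, y, x), patch in patches:
        acc[z:z + size, y:y + size, x:x + size] += patch
        cnt[z:z + size, y:y + size, x:x + size] += 1.0
    return acc / np.maximum(cnt, 1.0)
```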
The model generated a full sCT in roughly 21 seconds and achieved a MAE below 40 HU. The CDVH showed that fewer than 5% of voxels had a per-voxel absolute CT number difference larger than 185 HU.
A patient-specific, vision-transformer-based network was developed and validated as accurate and efficient for reconstructing 3D CT images from kV images.
How the human brain processes visual information merits careful investigation. We used functional MRI to study the selectivity of human brain responses to images and how these responses differ across individuals. In our first experiment, images predicted by a group-level encoding model to achieve maximal activation evoked stronger responses than images predicted to achieve average activation, and the size of this increase was positively related to the accuracy of the encoding model. In addition, aTLfaces and FBA1 showed higher activation to maximally activating synthetic images than to maximally activating natural images. In our second experiment, synthetic images generated with a personalized encoding model elicited stronger responses than those generated with group-level or other subjects' encoding models. The preference of aTLfaces for synthetic over natural images was also replicated. Our results indicate the feasibility of using data-driven, generative approaches to modulate macro-scale brain region activity and to probe inter-individual differences in the functional specializations of the human visual system.
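The step of selecting images "predicted to attain maximum activation" can be illustrated with a simple linear encoding model that scores a candidate pool. The sketch below uses synthetic stand-in data and a plain ridge regression; the feature source, regularization, and selection rule are assumptions, not the study's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: image features (e.g., from a pretrained vision model)
# and measured fMRI responses for one ROI; all data here are synthetic.
n_train, n_candidates, n_features = 200, 1000, 512
X_train = rng.normal(size=(n_train, n_features))
y_train = X_train @ rng.normal(size=n_features) + 0.1 * rng.normal(size=n_train)

# Ridge-regression encoding model: predicts ROI response from image features.
lam = 1.0
w = np.linalg.solve(X_train.T @ X_train + lam * np.eye(n_features),
                    X_train.T @ y_train)

# Score a pool of candidate images, then pick those predicted to maximize
# activation versus those predicted to sit near the average.
X_cand = rng.normal(size=(n_candidates, n_features))
pred = X_cand @ w
max_set = np.argsort(pred)[-10:]                       # predicted "maximal" images
avg_set = np.argsort(np.abs(pred - pred.mean()))[:10]  # predicted "average" images
print(pred[max_set].mean(), pred[avg_set].mean())
```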
Subject-specific models in cognitive and computational neuroscience, although performing well on the subjects they are trained on, usually generalize poorly to other individuals because of individual variability. An ideal individual-to-individual neural converter would generate genuine-looking neural signals of one person from those of another, helping to overcome the problem of individual variability for cognitive and computational models. In this study, we propose EEG2EEG, a novel individual-to-individual EEG converter inspired by generative models in computer vision. We used the THINGS EEG2 dataset to train and test 72 independent EEG2EEG models, one for each ordered pair among the 9 subjects. Our results show that EEG2EEG learns the mapping of neural representations from one subject's EEG signals to another's with high conversion accuracy. Moreover, the generated EEG signals contain clearer representations of visual information than can be obtained from real data. This method establishes a novel and state-of-the-art framework for neural conversion of EEG signals, providing a flexible and high-performance mapping from individual to individual brains and offering insight for both neural engineering and cognitive neuroscience.
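The abstract does not specify the converter's architecture, so the sketch below is only a minimal stand-in: a small MLP mapping one subject's flattened EEG trials to another subject's trials recorded for the same stimuli, trained on random placeholder data. The class name, layer sizes, and training setup are assumptions for illustration.

```python
import torch
import torch.nn as nn

class EEG2EEGMapper(nn.Module):
    """Minimal sketch of an individual-to-individual EEG converter: maps
    flattened (channels x time) trials of subject A to matched trials of
    subject B viewing the same stimuli."""
    def __init__(self, n_channels: int = 64, n_times: int = 100, hidden: int = 1024):
        super().__init__()
        d = n_channels * n_times
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(d, hidden), nn.GELU(),
                                 nn.Linear(hidden, d))
        self.shape = (n_channels, n_times)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).view(x.shape[0], *self.shape)

# Toy training loop on random stand-in data (real data would be paired trials
# from two subjects viewing the same images).
model = EEG2EEGMapper()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
xa, xb = torch.randn(32, 64, 100), torch.randn(32, 64, 100)
for _ in range(10):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(xa), xb)
    loss.backward()
    opt.step()
```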
An organism interacting with its environment is, in effect, placing a bet. Knowing only partially about a stochastic world, the organism must decide on its next action or short-term strategy, a decision that implicitly or explicitly relies on some model of the world's state. Better information about environmental statistics improves the bet, but in practice the resources for gathering information are limited. We argue that theories of optimal inference imply that models characterized by 'complexity' are harder to infer with bounded resources and lead to larger prediction errors. We therefore propose a principle of cautious action, or 'playing it safe': with limited capacity for information gathering, biological systems should favor simpler models of their environment, and hence less risky betting strategies. Within the framework of Bayesian inference, we show that the Bayesian prior determines a uniquely optimal strategy for safe adaptation. Applying the 'playing it safe' principle to stochastic phenotypic switching in bacteria, we show that it leads to increased fitness (population growth rate) of the bacterial population. We suggest that the principle applies broadly to adaptation, learning, and evolution, and helps delineate the environments in which organisms can thrive.
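For the bacterial example, the fitness measure can be made concrete with the standard bet-hedging formulation, used here only as an assumed illustration (the paper's exact model may differ): if the environment is in state $j$ with probability $p_j$, a fraction $\pi_i$ of the population adopts phenotype $i$, and $G_{ij}$ is the growth factor of phenotype $i$ in environment $j$, then the long-run population growth rate is

$$ \Lambda(\pi) \;=\; \sum_j p_j \,\log\!\Big( \sum_i \pi_i \, G_{ij} \Big), $$

and 'playing it safe' corresponds to choosing the switching strategy $\pi$ from a simpler, more conservative estimate of $p$ when information-gathering resources are scarce.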
Neocortical neurons show surprisingly variable spiking activity, even when the networks process identical stimuli. A prevailing hypothesis is that these networks operate in an asynchronous state, which accounts for the approximately Poissonian firing of neurons. In the asynchronous state, neurons fire independently of one another, so the probability that a neuron receives synchronous synaptic inputs is exceedingly low.