Generative_Image_Rotation: Using Pix2Pix cGAN to transform randomly oriented Protoplanetary Disk images into standardized face-on views for astronomical research.
Updated Aug 2, 2024 (Jupyter Notebook)
The Enterprise-Grade, Production-Ready Multi-Agent Orchestration Framework. Join our community: https://discord.com/servers/agora-999382051935506503
TensorFlow implementation of a 3D-CNN U-Net with grid attention and deep supervision (DSV) for pancreas segmentation, trained on the NIH CT-82 dataset.
Modular Python implementation of encoder-only, decoder-only and encoder-decoder transformer architectures from scratch, as detailed in Attention Is All You Need.
FlashAttention (Metal Port)
Algorithm for stroke occlusion detection. Work in progress. In the context of France 2030 project on stroke research (BOOSTER).
Training-free Post-training Efficient Sub-quadratic Complexity Attention. Implemented with OpenAI Triton.
ACDFSL for Hyperspectral Image Classification
AG-MAE: Anatomically Guided Spatio-Temporal Masked Auto-Encoder for Online Hand Gesture Recognition
The attention heads in the Transformer architecture possess a variety of capabilities. This is a carefully compiled list that summarizes the diverse functions of the attention heads.
[BMVC 2024] Official implementation of MSA^2 Net: Multi-scale Adaptive Attention-guided Network for Medical Image Segmentation
Visualize BERT's attention mechanism with a user-friendly script. Input text with a masked token, predict the masked word, and generate attention diagrams to understand BERT's focus. Ideal for AI enthusiasts and NLP researchers.
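The repo above plots attention diagrams from BERT. As a minimal, hedged sketch of what such a diagram contains (this is not the repo's script), the matrix it visualizes is the softmax of scaled query-key scores; the token list and random Q/K below are illustrative stand-ins for a real model's activations:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_weights(Q, K):
    """Scaled dot-product attention weights: softmax(Q K^T / sqrt(d))."""
    d = Q.shape[-1]
    return softmax(Q @ K.T / np.sqrt(d))

# Hypothetical tokens and random projections standing in for BERT activations.
tokens = ["[CLS]", "the", "[MASK]", "sat", "[SEP]"]
rng = np.random.default_rng(0)
Q = rng.standard_normal((len(tokens), 8))
K = rng.standard_normal((len(tokens), 8))

W = attention_weights(Q, K)
# Each row of W is a probability distribution: how much that token
# attends to every other token -- exactly what an attention diagram draws.
print(W.shape)        # (5, 5)
print(W.sum(axis=1))  # each row sums to 1
```

In the actual repo, these weights would come from the model's attention outputs rather than random matrices; the visualization step is then just a heatmap of `W` with `tokens` on both axes.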
A simple but complete full-attention transformer with a set of promising experimental features from various papers
An implementation of the GPT (Generative Pretrained Transformer) model from scratch, along with the GPT encoder, trained on Shakespeare's dialogues to produce Shakespearean text.
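The defining detail of a GPT-style decoder like the one above is causal masking: each token may only attend to itself and earlier tokens, so the model can generate text left to right. A minimal numpy sketch of that masking (illustrative, not the repo's code; the random Q/K stand in for learned projections):

```python
import numpy as np

def causal_attention_weights(Q, K):
    """GPT-style causal attention: mask future positions before the softmax."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    T = scores.shape[0]
    # True strictly above the diagonal = "future" positions to hide.
    mask = np.triu(np.ones((T, T), dtype=bool), k=1)
    scores = np.where(mask, -np.inf, scores)  # exp(-inf) -> 0 weight
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(1)
Q = rng.standard_normal((4, 8))
K = rng.standard_normal((4, 8))

W = causal_attention_weights(Q, K)
# The upper triangle is zero: token i never attends to tokens after i.
print(np.allclose(np.triu(W, k=1), 0))  # True
```

During generation this is what lets the model be trained on whole sequences in parallel while still predicting each next token from only its prefix.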
This project is a Flask web application that lets users upload images and generate captions for them with a custom AI model. The model combines EfficientNet as the convolutional neural network (CNN) component, a custom Long Short-Term Memory (LSTM) network, and a multi-head attention layer, and reaches 42% accuracy.
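The multi-head attention layer mentioned in that captioning model splits the feature dimension into heads, runs scaled dot-product attention per head, and concatenates the results. A bare-bones numpy sketch of that splitting (weights and learned projections omitted for brevity; this is an illustration, not the project's layer):

```python
import numpy as np

def multi_head_attention(X, num_heads):
    """Split features into heads, attend per head, concatenate.
    Identity Q/K/V projections are used here for brevity; a real layer
    would apply learned linear projections per head."""
    T, d = X.shape
    assert d % num_heads == 0, "feature dim must divide evenly across heads"
    dh = d // num_heads
    heads = []
    for h in range(num_heads):
        Qh = Kh = Vh = X[:, h * dh:(h + 1) * dh]
        scores = Qh @ Kh.T / np.sqrt(dh)
        e = np.exp(scores - scores.max(axis=-1, keepdims=True))
        W = e / e.sum(axis=-1, keepdims=True)   # per-head attention weights
        heads.append(W @ Vh)                    # per-head context vectors
    return np.concatenate(heads, axis=1)        # back to the full feature dim

rng = np.random.default_rng(2)
X = rng.standard_normal((6, 16))  # e.g. 6 image-region features of width 16
out = multi_head_attention(X, num_heads=4)
print(out.shape)  # (6, 16)
```

In a captioner, `X` would typically hold CNN region features, and the LSTM decoder would query them through such a layer at each generation step.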
A comprehensive paper list on Vision Transformers and attention, including papers, code, and related websites.
A PyTorch implementation of 3HAN (Hierarchical Attention Network) for fake news detection. The same model can be modified and trained for other text classification tasks.
This project aims to simplify texts from research papers using advanced natural language processing (NLP) techniques, making them more accessible to a broader audience
Code for the paper: Mixed Models with Multiple Instance Learning
OpenSTL: A Comprehensive Benchmark of Spatio-Temporal Predictive Learning