💭 I may be slow to respond.
  • Rutgers University

MingyuJ666/README.md
  • 👋 Hi, I’m Mingyu Jin, a PhD student at Rutgers University.
  • 👀 I’m interested in Trustworthy Large Language Models, Explainability, and Data Mining.
  • 🌱 I’m currently learning about interpretability in transformers.
  • 💞️ I’m looking to collaborate with students who are also interested in these areas.
  • 📫 How to reach me: [email protected]

Pinned Loading

  1. Stockagent Public

    [Preprint] Large Language Model-based Stock Trading in Simulated Real-world Environments

    Python · 162 stars · 34 forks

  2. ProLLM Public

    [COLM'24] We propose Protein Chain of Thought (ProCoT), which replicates the biological mechanism of signaling pathways as language prompts. It considers a signaling pathway as a protein reasoning …

    Python · 58 stars · 5 forks

  3. Disentangling-Memory-and-Reasoning Public

    [preprint] We propose a novel fine-tuning method, Separate Memory and Reasoning, which combines prompt tuning with LoRA.

    Python · 43 stars · 2 forks

  4. The-Impact-of-Reasoning-Step-Length-on-Large-Language-Models Public

    [ACL'24] Chain of Thought (CoT) is significant in improving the reasoning abilities of large language models (LLMs). However, the correlation between the effectiveness of CoT and the length of reas…

    Jupyter Notebook · 43 stars · 4 forks

  5. Luckfort/CD Public

    [COLING'25] Exploring Concept Depth: How Large Language Models Acquire Knowledge at Different Layers?

    Python · 70 stars · 4 forks

  6. Rope_with_LLM Public

    [preprint] Our study systematically investigates massive values in LLMs' attention mechanisms. First, we observe massive values are concentrated in low-frequency dimensions across different attenti…

    Jupyter Notebook · 8 stars

193 contributions in the last year


Contribution activity

March 2025

Created 1 commit in 1 repository
Created 1 repository

Opened their first pull request on GitHub in agiresearch/Cerebrum Public

ollama test call llm api Public