About

I am a doctoral student in the Language Technologies Institute at Carnegie Mellon University, where I am advised by Yonatan Bisk and Emma Strubell.

My research interests are in characterizing and evaluating the computational and environmental costs of machine learning workloads, with the goal of designing computationally efficient and energy-efficient systems for artificial intelligence. My research is generously supported by the NSF Graduate Research Fellowship.

Previously, I received my B.S. in Computer Science and B.S. in Electrical Engineering from Northwestern University, where I worked with Doug Downey and Thrasos Pappas. I’ve also spent time at Google, Meta FAIR, and the Allen Institute for Artificial Intelligence.

Email: jaredfern [at] cmu.edu

Out-of-Date News:
[April 2025] Preliminary work won a Best Proposal Award at the CCAI Workshop at ICLR!
[April 2025] Presented joint work on low-power distributed training at SIGBOVIK!
[Nov. 2024] Presented our paper on continual pretraining for LLMs at EMNLP!
[June 2024] Interning at Meta FAIR studying the efficiency of distributed training!
[Dec. 2023] Presented work on deep learning framework efficiency at EMNLP!

Selected Publications

Energy Considerations of Large Language Model Inference and Efficiency Optimizations

The 63rd Annual Meeting of the Association for Computational Linguistics (ACL), 2025.

Efficient Hardware Scaling and Diminishing Returns in Large-Scale Training of Language Models

Transactions on Machine Learning Research, 2025.

Holistically Evaluating the Environmental Impact of Creating Language Models

The Thirteenth International Conference on Learning Representations (ICLR), 2025.

Gradient Localization Improves Lifelong Pretraining of Language Models

Findings of the Association for Computational Linguistics: EMNLP (EMNLP Findings), 2024.

The Framework Tax: Disparities Between Inference Efficiency in NLP Research and Deployment

Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023.