# Aaditya-Nanda/pegasus-samsum
This model is a fine-tuned dialogue summarization model based on google/pegasus-xsum.
## Model Details

- Base model: google/pegasus-xsum
- Task: dialogue summarization
- Training dataset: knkarthick/dialogsum
- Framework: Hugging Face Transformers
## Intended Use
This model is intended to generate short abstractive summaries of multi-turn conversations.
## Training Notes

- Training epochs: 1
- Train subset size: 2000 examples
- Validation subset size: 200 examples
- Max input length: 512 tokens
- Max target length: 128 tokens
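The training setup above can be sketched with the Hugging Face `Seq2SeqTrainer` API. This is a minimal illustration, not the exact script used for this run: names like `run_training`, the batch size, and the output directory are assumptions.

```python
MODEL_NAME = "google/pegasus-xsum"
DATASET_NAME = "knkarthick/dialogsum"
MAX_INPUT_LEN = 512     # max tokens for the dialogue input
MAX_TARGET_LEN = 128    # max tokens for the generated summary
TRAIN_SIZE, VAL_SIZE = 2000, 200


def preprocess(batch, tokenizer):
    """Tokenize dialogues and reference summaries to the lengths above."""
    inputs = tokenizer(batch["dialogue"], max_length=MAX_INPUT_LEN, truncation=True)
    labels = tokenizer(text_target=batch["summary"], max_length=MAX_TARGET_LEN, truncation=True)
    inputs["labels"] = labels["input_ids"]
    return inputs


def run_training():
    # Requires `transformers` and `datasets`; downloads the base model
    # and dataset on first use.
    from datasets import load_dataset
    from transformers import (AutoModelForSeq2SeqLM, AutoTokenizer,
                              DataCollatorForSeq2Seq, Seq2SeqTrainer,
                              Seq2SeqTrainingArguments)

    ds = load_dataset(DATASET_NAME)
    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

    # Lightweight subsets, matching the sizes listed above.
    train = ds["train"].select(range(TRAIN_SIZE)).map(
        lambda b: preprocess(b, tokenizer), batched=True)
    val = ds["validation"].select(range(VAL_SIZE)).map(
        lambda b: preprocess(b, tokenizer), batched=True)

    args = Seq2SeqTrainingArguments(
        output_dir="pegasus-samsum",  # illustrative output path
        num_train_epochs=1,
        per_device_train_batch_size=1,
        predict_with_generate=True,
    )
    trainer = Seq2SeqTrainer(
        model=model,
        args=args,
        train_dataset=train,
        eval_dataset=val,
        data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
    )
    trainer.train()
```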
## Limitations
- This model was trained as a lightweight project run and may miss details in longer or more complex conversations.
- The repository name still uses `samsum` for compatibility with the deployed app, but the fine-tuning dataset is DialogSum.
## Example
Input:
Riya: Can you send me the draft by 6 PM?
Karan: Yes, I will finish the conclusion and add the March sales numbers first.
Riya: Great, I will review it and prepare the slides.
Expected summary:
Karan will send the updated draft by 6 PM, and Riya will review it before preparing the slides.
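A dialogue like the one above can be run through the model with the `transformers` summarization pipeline. This is a sketch: the helper names `format_dialogue` and `summarize` are illustrative, and calling `summarize` downloads the checkpoint on first use.

```python
def format_dialogue(turns):
    """Join (speaker, text) turns into the flat string the model expects."""
    return "\n".join(f"{speaker}: {text}" for speaker, text in turns)


def summarize(dialogue, model_id="Aaditya-Nanda/pegasus-samsum"):
    # Requires `transformers` (and a backend such as PyTorch) plus
    # network access to fetch the checkpoint on first use.
    from transformers import pipeline

    summarizer = pipeline("summarization", model=model_id)
    return summarizer(dialogue, max_length=128, truncation=True)[0]["summary_text"]


turns = [
    ("Riya", "Can you send me the draft by 6 PM?"),
    ("Karan", "Yes, I will finish the conclusion and add the March sales numbers first."),
    ("Riya", "Great, I will review it and prepare the slides."),
]
dialogue = format_dialogue(turns)
```

Passing `dialogue` to `summarize` should produce a short abstractive summary along the lines of the expected output above.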