Marketplace Model: Summarize Text Standard
From Minibase
Purpose: Automatically generate concise summaries of English-language text while preserving the main ideas and context.
Training: Trained on ~50k English-language examples of human-written and synthetic summaries across diverse domains.
Primary value: Produces accurate, coherent summaries with low latency, suitable for real-time or batch applications.
⸻
Intended Use
• Use cases: news/article summarization, meeting notes, chat transcripts, customer support logs, compliance documentation, executive briefings.
• Users: knowledge workers, researchers, analysts, customer support teams, product developers.
• Input: English text (UTF-8).
• Output: Concise summaries in English, configurable by length and style.
Out-of-Scope
• Summarizing non-English content.
• Generating interpretations beyond the input text.
• Acting as a fact-checker or validator.
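Because input is UTF-8 English text and the model has a bounded context window (8,192 tokens per the technical details below), long documents need to be chunked before summarization. A minimal pre-processing sketch, assuming a rough ~4 characters/token heuristic; the chunk-size heuristic and the function itself are illustrative, not part of this card:

```python
def chunk_for_context(text: str, max_tokens: int = 8192, chars_per_token: int = 4) -> list[str]:
    """Split UTF-8 English text into chunks that fit the model's context window.

    Uses a rough ~4 characters/token heuristic; a real pipeline would use the
    model's tokenizer (49,152-entry vocabulary) for exact token counts.
    """
    max_chars = max_tokens * chars_per_token
    chunks = []
    while text:
        if len(text) <= max_chars:
            chunks.append(text)
            break
        # Prefer to break at a sentence boundary inside the window.
        cut = text.rfind(". ", 0, max_chars)
        if cut == -1:
            cut = max_chars
        else:
            cut += 1  # keep the period with its chunk
        chunks.append(text[:cut].strip())
        text = text[cut:].lstrip()
    return chunks
```

Each chunk can then be summarized independently (e.g. for batch processing of long transcripts), with the per-chunk summaries optionally summarized again into one briefing.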
⸻
Model Details
• Task: Extractive + abstractive summarization.
• Language: English only.
Basic Information
Base Model: Standard Base
Created by: Michaelminibase
Times imported: 429
Released: Sep 26, 2025
Model Size: 368 MB
Model Type: Causal Language Model
Format: HIGH
Technical Details
Hidden Size: 960
Hidden Layers: 32
Attention Heads: 15
Vocabulary Size: 49,152
Max Context Length: 8,192 tokens
Precision: BFloat16 (BF16)
Learning Rate: 0.000050
Training Epochs: 3
Effective Batch Size: 16
Optimizer: AdamW
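The hyperparameters above pin down a rough parameter count. A back-of-the-envelope sketch, assuming standard multi-head attention, a 4× MLP expansion, and tied input/output embeddings — none of which the card states, so treat the result as indicative only:

```python
# Hyperparameters taken from the card above.
vocab, hidden, layers = 49_152, 960, 32

embedding = vocab * hidden           # token embeddings (assumed tied with the LM head)
attention = 4 * hidden * hidden      # Q, K, V, and output projections (assumed full MHA)
mlp = 2 * hidden * (4 * hidden)      # up- and down-projections (assumed 4x expansion)
total = embedding + layers * (attention + mlp)

print(f"~{total / 1e6:.0f}M parameters")  # → ~401M parameters
```

The true count depends on details the card omits (attention variant, MLP width, biases, normalization layers), so this is an order-of-magnitude estimate, not a specification.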
Training Datasets
| Name | Type | Examples | Size |
|---|---|---|---|
| Summarize Text (Part 1) | SFT | 10,000 | 24.7 MB |
| Summarize Text (Part 2) | SFT | 10,000 | 24.5 MB |
| Summarize Text (Part 3) | SFT | 10,000 | 24.9 MB |
| Summarize Text (Part 4) | SFT | 10,000 | 24.4 MB |
| Summarize Text (Part 5) | SFT | 10,000 | 24.3 MB |
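All five datasets are SFT (supervised fine-tuning) sets, i.e. input texts paired with target summaries. A hypothetical record in JSONL form — the field names are illustrative assumptions, since the card does not specify the actual schema:

```python
import json

# Illustrative SFT record: field names are assumptions, not the card's schema.
record = {
    "input": "The quarterly report shows revenue grew 12% year over year, "
             "driven mainly by subscription renewals in the enterprise segment.",
    "output": "Revenue rose 12% YoY, led by enterprise subscription renewals.",
}

line = json.dumps(record)          # one record per line in a JSONL file
assert json.loads(line) == record  # round-trips cleanly
```

At roughly 24–25 MB per 10,000 examples, each record in the table averages about 2.5 KB of text.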