Methodology · March 5, 2026 · 7 min read · Prioris Research

Value of Information: How to Score Intelligence That Actually Matters

Not all information is equally valuable. Value of Information (VoI) scoring quantifies what matters — combining relevance, novelty, reliability, recency, and decision impact into a single actionable metric.

Every day, hundreds of news articles, research papers, regulatory updates, and data releases compete for your attention. Most are noise. A few are signal. The challenge is distinguishing between them — quickly, consistently, and at scale.

Value of Information (VoI) scoring is the answer. Rooted in decision theory, VoI quantifies how much a piece of information would change your optimal decision if you had it. High VoI items shift your strategy. Low VoI items confirm what you already know.
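To make the decision-theoretic definition concrete, here is a minimal sketch of VoI as the expected value of perfect information for a two-action decision. The actions, states, and payoffs are entirely hypothetical; the point is only that information has value precisely when learning the state would change the optimal choice.

```python
# Illustrative only: a two-action decision under two possible states.
# VoI (here, expected value of perfect information) is the gain from
# learning the true state before acting.

payoffs = {
    # hypothetical action -> payoff under (state_a, state_b)
    "renew_vendor":  (100.0, -50.0),
    "switch_vendor": (20.0,   30.0),
}
p_state_a = 0.5  # prior belief that state_a holds

# Best expected payoff when acting WITHOUT the information:
ev_without = max(
    p_state_a * a + (1 - p_state_a) * b for a, b in payoffs.values()
)

# Expected payoff when we learn the state first, then pick the best action:
ev_with = (
    p_state_a * max(a for a, _ in payoffs.values())
    + (1 - p_state_a) * max(b for _, b in payoffs.values())
)

voi = ev_with - ev_without
print(voi)  # positive: the information would change the optimal decision
```

If the same action were best in both states, `ev_with` would equal `ev_without` and the VoI would be zero — the formal version of "confirms what you already know."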

The VoI framework

VoI scoring combines five dimensions:

Relevance (domain match). How closely does this item relate to your monitored domains? A cybersecurity advisory is high-relevance for a CISO, low-relevance for a dermatologist — unless that advisory affects medical device software.

Novelty. Does this item contain genuinely new information, or is it a rehash of previously reported findings? Semantic similarity against your existing knowledge base reveals true novelty.
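One common way to operationalize this dimension — a sketch, not necessarily Prioris's exact method — is to score novelty as one minus the maximum cosine similarity between the item's embedding and everything already in the knowledge base. The 3-d vectors below are toys; a real system would use a text embedding model.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def novelty(item_vec, knowledge_base):
    """1 - max similarity to any previously seen item (1.0 = fully novel)."""
    if not knowledge_base:
        return 1.0  # nothing seen before: maximally novel
    return 1.0 - max(cosine(item_vec, kb) for kb in knowledge_base)

kb = [(1.0, 0.0, 0.0), (0.7, 0.7, 0.0)]
near_dupe = novelty((1.0, 0.05, 0.0), kb)   # close to a seen item -> low
fresh     = novelty((0.0, 0.0, 1.0), kb)    # orthogonal to all -> high
print(near_dupe, fresh)
```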

Source reliability. A Nature paper and a blog post might report the same finding, but their reliability differs by orders of magnitude. Bayesian source priors, updated by track record, quantify this difference.
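A standard way to implement a Bayesian source prior updated by track record is a Beta-Bernoulli model: reliability is the posterior probability that the source's next claim holds up, given how many past claims were confirmed versus refuted. The prior parameters below are illustrative assumptions, not calibrated values.

```python
def reliability(confirmed, refuted, prior_alpha=2.0, prior_beta=2.0):
    """Posterior mean P(next claim is accurate) under a Beta prior.

    prior_alpha / prior_beta encode a weakly informative starting belief;
    confirmed / refuted are the source's observed track record.
    """
    return (prior_alpha + confirmed) / (
        prior_alpha + prior_beta + confirmed + refuted
    )

strong = reliability(95, 5)  # long, accurate record -> high reliability
thin   = reliability(1, 3)   # short, poor record -> pulled toward the prior
print(strong, thin)
```

Note how a source with only four data points stays near the prior, while the well-established source's score is dominated by its record — exactly the behavior you want when a new blog and an established journal report the same finding.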

Recency. Information decays. A zero-day vulnerability disclosure is maximally valuable in the first hours. A macroeconomic trend report retains value for weeks. Time-to-live (TTL) captures this decay.
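The decay described above can be sketched as exponential decay with a per-category half-life. The half-lives below are hypothetical, chosen only to mirror the zero-day-versus-macro-trend contrast in the text.

```python
# Hypothetical per-category half-lives, in hours.
HALF_LIFE_HOURS = {
    "zero_day_advisory": 6.0,     # stale within a day
    "macro_trend_report": 336.0,  # retains value for ~2 weeks
}

def recency(category, age_hours):
    """Recency factor in (0, 1]: halves every half-life."""
    return 0.5 ** (age_hours / HALF_LIFE_HOURS[category])

print(recency("zero_day_advisory", 12))   # two half-lives old
print(recency("macro_trend_report", 12))  # barely decayed at all
```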

Decision impact. The highest-VoI items are those that would change a pending decision. If you are evaluating a vendor, a security audit finding about that vendor has high decision impact. If you have no pending vendor decisions, the same finding has lower VoI for you specifically.

Scoring in practice

Prioris computes VoI scores using a combination of LLM-based sub-scoring and mathematical aggregation:

  1. Each ingested item receives five sub-scores (0-100) from a language model: domain match, novelty, source reliability, recency, and category classification
  2. Sub-scores are combined using a weighted formula calibrated against user feedback
  3. Items scoring above the VoI threshold are embedded in a vector space for cross-domain connection
  4. High-VoI items receive deep analysis: claim extraction, entity linking, and contradiction detection
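Step 2 above — weighted aggregation of sub-scores — can be sketched as a weighted mean over the framework's five dimensions. The weights and the sample item are hypothetical; in the actual pipeline they would be calibrated against user feedback.

```python
# Hypothetical weights over the five framework dimensions (sum to 1.0).
WEIGHTS = {
    "domain_match": 0.30,
    "novelty": 0.25,
    "source_reliability": 0.20,
    "recency": 0.15,
    "decision_impact": 0.10,
}

def voi_score(sub_scores):
    """Combine per-dimension sub-scores (0-100) into one VoI score (0-100)."""
    assert set(sub_scores) == set(WEIGHTS), "one sub-score per dimension"
    return sum(WEIGHTS[k] * sub_scores[k] for k in WEIGHTS)

item = {
    "domain_match": 90,
    "novelty": 80,
    "source_reliability": 70,
    "recency": 60,
    "decision_impact": 95,
}
score = voi_score(item)
print(score)  # still on the 0-100 scale; compare against the VoI threshold
```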

Why VoI beats keyword alerts

Keyword alerts are binary: match or no match. They generate false positives (any mention of your keyword, regardless of context) and miss semantic matches (articles about the same topic using different terminology).

VoI scoring is continuous and contextual. It accounts for who you are, what decisions you face, and what you have already seen. The same article receives different VoI scores for different users — because its value depends on the recipient's context.

Calibration and learning

VoI scores improve through feedback. When users engage with high-scored items (read, save, share), the system confirms its scoring was accurate. When users ignore high-scored items or engage with low-scored ones, the system recalibrates domain weights and scoring parameters.
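One simple recalibration rule consistent with this description — illustrative only, not Prioris's actual algorithm — treats each logged interaction as a (sub-scores, engaged) pair and nudges the weights toward dimensions that predicted engagement, renormalizing so they still sum to one.

```python
def update_weights(weights, sub_scores, engaged, lr=0.01):
    """Gradient-style nudge of dimension weights from one feedback event.

    weights:    dict of dimension -> weight (assumed to sum to 1)
    sub_scores: dict of dimension -> 0-100 sub-score for the item
    engaged:    True if the user read/saved/shared the item
    """
    predicted = sum(weights[k] * sub_scores[k] / 100 for k in weights)
    error = (1.0 if engaged else 0.0) - predicted
    nudged = {
        k: max(0.0, weights[k] + lr * error * sub_scores[k] / 100)
        for k in weights
    }
    total = sum(nudged.values())
    return {k: v / total for k, v in nudged.items()}  # renormalize

# Toy example: user engages with an item that scored high on novelty only.
w = {"novelty": 0.5, "recency": 0.5}
w2 = update_weights(w, {"novelty": 90, "recency": 10}, engaged=True)
print(w2)  # novelty's weight rises at recency's expense
```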

This creates a virtuous cycle: better scoring leads to better engagement, which produces better feedback, which improves scoring further. After a few weeks of use, VoI scoring becomes highly personalized.

Tags: VoI · scoring · decision theory · methodology
