Compare commits

..

22 Commits

Author SHA1 Message Date
copilot-swe-agent[bot]
2ce864fa81 Make dashboard queries backward compatible to show data from both forensic and failure indexes
Co-authored-by: seanthegeek <44679+seanthegeek@users.noreply.github.com>
2026-02-20 21:38:45 +00:00
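
Backward compatibility here means that dashboards and ad-hoc queries can cover both the legacy and the renamed indexes by searching a combined index pattern. A minimal sketch using `requests` against an Elasticsearch/OpenSearch `_search` endpoint; the host, index pattern, and aggregation field are illustrative assumptions, not the exact queries shipped in the dashboards.

```python
import requests

# Query both the legacy and the renamed failure indexes in one request.
# Host, index pattern, and field names below are placeholders.
ES_URL = "http://localhost:9200"
INDEX_PATTERN = "dmarc_failure*,dmarc_forensic*"

query = {
    "size": 0,
    "query": {"range": {"arrival_date_utc": {"gte": "now-7d/d"}}},
    "aggs": {"by_domain": {"terms": {"field": "reported_domain", "size": 25}}},
}

resp = requests.post(
    f"{ES_URL}/{INDEX_PATTERN}/_search",
    json=query,
    params={"ignore_unavailable": "true"},  # tolerate a missing legacy index
    timeout=10,
)
resp.raise_for_status()
for bucket in resp.json()["aggregations"]["by_domain"]["buckets"]:
    print(bucket["key"], bucket["doc_count"])
```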
copilot-swe-agent[bot]
423e0611c5 Fix Splunk sourcetype to use colon separator (dmarc:failure) matching original convention
Co-authored-by: seanthegeek <44679+seanthegeek@users.noreply.github.com>
2026-02-20 21:05:08 +00:00
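
The sourcetype convention matters when events reach a Splunk HTTP Event Collector: downstream searches filter on `sourcetype="dmarc:failure"` with a colon, not an underscore variant. A minimal HEC sketch; the URL, token, and event body are placeholders rather than parsedmarc's exact payload.

```python
import requests

HEC_URL = "https://splunk.example.com:8088/services/collector/event"
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"  # placeholder token

event = {
    "sourcetype": "dmarc:failure",  # colon separator, matching the original convention
    "event": {
        "feedback_type": "auth-failure",
        "reported_domain": "example.com",
        "source_ip_address": "203.0.113.10",
    },
}

resp = requests.post(
    HEC_URL,
    json=event,
    headers={"Authorization": f"Splunk {HEC_TOKEN}"},
    timeout=10,
)
resp.raise_for_status()
```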
copilot-swe-agent[bot]
195fdaf7b2 Add DMARCbis field validation, preserve pass disposition, add comprehensive tests
Co-authored-by: seanthegeek <44679+seanthegeek@users.noreply.github.com>
2026-02-20 21:03:23 +00:00
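
The validation this commit adds (visible in the `parse_aggregate_report_xml` hunk further down) warns on out-of-range DMARCbis policy values instead of rejecting the report, and it also stops coercing a `pass` disposition to `none`. A standalone sketch of the warning pattern; the function name and policy dict shape are assumptions for illustration.

```python
import logging

logger = logging.getLogger("parsedmarc")

VALID_NP = ("none", "quarantine", "reject")
VALID_TESTING = ("n", "y")
VALID_DISCOVERY_METHODS = ("psl", "treewalk")


def warn_on_invalid_dmarcbis_policy(policy_published):
    """Log a warning for DMARCbis values outside their allowed sets,
    but keep the values so the report still parses."""
    np_ = policy_published.get("np")
    if np_ is not None and np_ not in VALID_NP:
        logger.warning("Invalid np value: {0}".format(np_))
    testing = policy_published.get("testing")
    if testing is not None and testing not in VALID_TESTING:
        logger.warning("Invalid testing value: {0}".format(testing))
    discovery_method = policy_published.get("discovery_method")
    if discovery_method is not None and discovery_method not in VALID_DISCOVERY_METHODS:
        logger.warning("Invalid discovery_method value: {0}".format(discovery_method))


# Example: "block" is not a valid np value, so only that key triggers a warning.
warn_on_invalid_dmarcbis_policy(
    {"np": "block", "testing": "y", "discovery_method": "treewalk"}
)
```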
copilot-swe-agent[bot]
447f452735 Rename forensic→failure in cli.py, docs, dashboards; add DMARCbis fields to ES/OS output
Co-authored-by: seanthegeek <44679+seanthegeek@users.noreply.github.com>
2026-02-20 21:01:16 +00:00
copilot-swe-agent[bot]
148f4c87a9 Rename "forensic" to "failure" in docs and dashboard configs
Update documentation files (output.md, usage.md, kibana.md, splunk.md,
elasticsearch.md, index.md, example.ini) and dashboard configurations
(Grafana JSON, Kibana ndjson, Splunk XML) to use "failure" terminology
instead of "forensic", consistent with the codebase rename.

- CLI args: --forensic-* → --failure-*
- Config keys: save_forensic → save_failure, forensic_topic → failure_topic, etc.
- Index names: dmarc_forensic → dmarc_failure
- Splunk dashboard: renamed file from dmarc_forensic_dashboard.xml to dmarc_failure_dashboard.xml
- Backward-compat note preserved: "formerly known as forensic reports"

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-02-20 20:57:18 +00:00
copilot-swe-agent[bot]
89fdbd82b0 Rename forensic references to failure in cli.py
- Rename all forensic_* variables to failure_*
- Update CLI argument names (--forensic-* to --failure-*)
- Update default filenames (forensic.json/csv to failure.json/csv)
- Update function calls to match renamed output module functions
- Update index names (dmarc_forensic to dmarc_failure)
- Update report type strings and dict keys
- Add backward-compatible config key reading (accept both old and new names)
- Update help text and log messages

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-02-20 20:53:20 +00:00
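
One way to accept both the old `save_forensic`-style keys and their new `failure` equivalents when reading the INI file, sketched with the standard-library `configparser`; parsedmarc's actual option loading in `cli.py` differs, and the helper name here is made up.

```python
from configparser import ConfigParser


def get_bool_either(section, new_key, old_key, default=False):
    """Prefer the new key, fall back to the legacy key, then to a default."""
    if new_key in section:
        return section.getboolean(new_key)
    if old_key in section:
        return section.getboolean(old_key)
    return default


config = ConfigParser()
config.read_string(
    "[general]\n"
    "save_aggregate = True\n"
    "save_forensic = True\n"  # legacy key still honored
)
general = config["general"]
save_failure = get_bool_either(general, "save_failure", "save_forensic")
print(save_failure)  # True, taken from the legacy key
```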
copilot-swe-agent[bot]
f019096e5f Rename forensic to failure in output/integration modules
Rename all 'forensic' references to 'failure' in the output modules:
- elastic.py, opensearch.py, splunk.py, kafkaclient.py, syslog.py,
  gelf.py, webhook.py, loganalytics.py, s3.py

Changes include:
- Function/method names: save_forensic_* → save_failure_*
- Variable/parameter names: forensic_* → failure_*
- Class names: _ForensicReportDoc → _FailureReportDoc,
  _ForensicSampleDoc → _FailureSampleDoc
- Index/topic/sourcetype names: dmarc_forensic → dmarc_failure
- Log messages and docstrings updated
- Import statements updated to use new names from core module
- Backward-compatible aliases added at end of each file
- DMARCbis aggregate fields added to elastic.py and opensearch.py:
  np (Keyword), testing (Keyword), discovery_method (Keyword),
  generator (Text)

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-02-20 20:47:24 +00:00
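
The DMARCbis columns added to elastic.py and opensearch.py map to plain `Keyword`/`Text` fields. A minimal `elasticsearch_dsl` sketch of the same idea; the class names and surrounding structure are placeholders, not parsedmarc's actual document classes.

```python
from elasticsearch_dsl import Document, InnerDoc, Keyword, Object, Text


class _PolicyPublishedDoc(InnerDoc):
    # The existing DMARC policy fields (p, sp, pct, ...) would live here too.
    # New DMARCbis fields, mirroring the types named in the commit message:
    np = Keyword()
    testing = Keyword()
    discovery_method = Keyword()


class _AggregateReportDoc(Document):
    generator = Text()  # free-text report generator string
    published_policy = Object(_PolicyPublishedDoc)

    class Index:
        name = "dmarc_aggregate"
```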
copilot-swe-agent[bot]
6660be2c8c Align DMARCbis fields with actual XSD schema: testing, discovery_method, generator, human_result; handle namespaced XML
Co-authored-by: seanthegeek <44679+seanthegeek@users.noreply.github.com>
2026-02-20 20:40:37 +00:00
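
Some DMARCbis-era reports wrap the document in an XML namespace, which breaks naive tag lookups. One standard-library way to normalize that before parsing, shown only as a sketch (parsedmarc's own handling may differ in detail, and the namespace URI below is a placeholder).

```python
import xml.etree.ElementTree as ET


def strip_namespaces(xml_text):
    """Return the XML with namespace URIs removed from every tag name."""
    root = ET.fromstring(xml_text)
    for element in root.iter():
        if "}" in element.tag:
            element.tag = element.tag.split("}", 1)[1]  # drop the '{uri}' prefix
    return ET.tostring(root, encoding="unicode")


namespaced = '<feedback xmlns="http://example.com/dmarc"><version>1.0</version></feedback>'
print(strip_namespaces(namespaced))  # <feedback><version>1.0</version></feedback>
```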
copilot-swe-agent[bot]
e09b8506fa Add DMARCbis fields (np, psd, t) to aggregate reports and rename forensic→failure in core parsing
Co-authored-by: seanthegeek <44679+seanthegeek@users.noreply.github.com>
2026-02-20 20:35:46 +00:00
copilot-swe-agent[bot]
c6413a4a4c Rename forensic references to failure with backward-compatible aliases
- Rename parse_forensic_report -> parse_failure_report
- Rename parsed_forensic_reports_to_csv_rows -> parsed_failure_reports_to_csv_rows
- Rename parsed_forensic_reports_to_csv -> parsed_failure_reports_to_csv
- Update all internal variable names (forensic_report -> failure_report, etc.)
- Change report_type from 'forensic' to 'failure'
- Use FailureReport type instead of ForensicReport
- Use InvalidFailureReport instead of InvalidForensicReport in function bodies
- Update all docstrings and log messages
- Add backward-compatible aliases at end of file

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-02-20 20:33:52 +00:00
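
The alias pattern this commit describes is just module-level rebinding, so existing imports keep working while new code uses the `failure` names. A simplified sketch with hypothetical function bodies:

```python
class InvalidFailureReport(ValueError):
    """Raised when an invalid DMARC failure report is encountered."""


def parse_failure_report(feedback_report, sample):
    """Placeholder for the real parser; returns a parsed failure report dict."""
    return {"report_type": "failure", "feedback_report": feedback_report, "sample": sample}


# Backward-compatible aliases: the old names keep resolving to the new objects,
# so `from parsedmarc import parse_forensic_report` continues to work.
InvalidForensicReport = InvalidFailureReport
parse_forensic_report = parse_failure_report
```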
copilot-swe-agent[bot]
81fba4b8d2 Initial plan 2026-02-20 20:21:44 +00:00
Sean Whalen
33eb2aaf62 9.1.0
## Enhancements

- Add TCP and TLS support for syslog output. (#656)
- Skip DNS lookups in GitHub Actions to prevent DNS timeouts during tests. (#657)
- Remove microseconds from DMARC aggregate report time ranges before parsing them.
2026-02-20 14:36:37 -05:00
Sean Whalen
1387fb4899 9.0.11
- Remove microseconds from DMARC aggregate report time ranges before parsing them.
2026-02-20 14:27:51 -05:00
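
The microsecond fix (visible in the `parse_aggregate_report_xml` hunk near the end of this diff) drops the fractional part before converting the Unix timestamp, since some reporters emit `begin`/`end` values like `1708444800.123456`. A one-line sketch with made-up sample values:

```python
# int() raises ValueError on "1708444800.123456", so strip fractional seconds first.
date_range = {"begin": "1708444800.123456", "end": "1708531200"}
begin_ts = int(date_range["begin"].split(".")[0])
end_ts = int(date_range["end"].split(".")[0])
print(begin_ts, end_ts)  # 1708444800 1708531200
```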
Copilot
4d97bd25aa Skip DNS lookups in GitHub Actions to prevent test timeouts (#657)
* Add offline mode for tests in GitHub Actions to skip DNS lookups

Co-authored-by: seanthegeek <44679+seanthegeek@users.noreply.github.com>

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: seanthegeek <44679+seanthegeek@users.noreply.github.com>
2026-02-18 18:19:28 -05:00
Copilot
17a612df0c Add TCP and TLS transport support to syslog module (#656)
- Updated parsedmarc/syslog.py to support UDP, TCP, and TLS protocols
- Added protocol parameter with UDP as default for backward compatibility
- Implemented TLS support with CA verification and client certificate auth
- Added retry logic for TCP/TLS connections with configurable attempts and delays
- Updated parsedmarc/cli.py with new config file parsing
- Updated documentation with examples for TCP and TLS configurations

Co-authored-by: seanthegeek <44679+seanthegeek@users.noreply.github.com>

* Remove CLI arguments for syslog options, keep config-file only

Per user request, removed command-line argument options for syslog parameters.
All new syslog options (protocol, TLS settings, timeout, retry) are now only
available via configuration file, consistent with other similar options.

Co-authored-by: seanthegeek <44679+seanthegeek@users.noreply.github.com>

* Fix code review issues: remove trailing whitespace and add cert validation

- Removed trailing whitespace from syslog.py and usage.md
- Added warning when only one of certfile_path/keyfile_path is provided
- Improved error handling for incomplete TLS client certificate configuration

Co-authored-by: seanthegeek <44679+seanthegeek@users.noreply.github.com>

* Set minimum TLS version to 1.2 for enhanced security

Explicitly configured ssl_context.minimum_version = TLSVersion.TLSv1_2
to ensure only secure TLS versions are used for syslog connections.

Co-authored-by: seanthegeek <44679+seanthegeek@users.noreply.github.com>

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: seanthegeek <44679+seanthegeek@users.noreply.github.com>
2026-02-18 18:12:59 -05:00
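
The TLS behaviour described above (CA verification, optional client certificates, TLS 1.2 minimum) corresponds to the standard-library `ssl` APIs. A minimal client sketch, not parsedmarc's actual `syslog.py` code; the host, port, and certificate paths are placeholders, and the retry loop is omitted.

```python
import socket
import ssl

SYSLOG_HOST = "syslog.example.com"  # placeholder
SYSLOG_PORT = 6514

context = ssl.create_default_context(cafile="/path/to/ca-cert.pem")  # placeholder path
context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse TLS 1.0/1.1
# Optional mutual authentication, mirroring certfile_path/keyfile_path:
# context.load_cert_chain("/path/to/client-cert.pem", "/path/to/client-key.pem")

message = "<14>parsedmarc: example DMARC failure event\n".encode("utf-8")

with socket.create_connection((SYSLOG_HOST, SYSLOG_PORT), timeout=10.0) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=SYSLOG_HOST) as tls_sock:
        tls_sock.sendall(message)
```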
Blackmoon
221bc332ef Fixed a typo in policies.successful_session_count (#654) 2026-02-09 13:57:11 -05:00
Sean Whalen
a2a75f7a81 Fix timestamp parsing in aggregate report by removing fractional seconds 2026-01-21 13:08:48 -05:00
Anael Mobilia
50fcb51577 Update supported Python versions in docs + readme (#652)
* Update README.md

* Update index.md

* Update python-tests.yml
2026-01-19 14:40:01 -05:00
Sean Whalen
dd9ef90773 9.0.10
- Support Python 3.14+
2026-01-17 14:09:18 -05:00
Sean Whalen
0e3a4b0f06 9.0.9
Validate that a string is base64-encoded before trying to base64 decode it. (PRs #648 and #649)
2026-01-08 13:29:23 -05:00
maraspr
343b53ef18 remove newlines before b64decode (#649) 2026-01-08 12:24:20 -05:00
maraspr
792079a3e8 Validate that string is base64 (#648) 2026-01-08 10:15:27 -05:00
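
These two fixes combine in `extract_report` (shown in a hunk near the end of this diff): newlines are stripped and `validate=True` is passed so that plain-text content is not misinterpreted as base64. A self-contained sketch of the same check:

```python
import binascii
from base64 import b64decode


def decode_if_base64(content):
    """Return decoded bytes when the string is genuinely base64,
    otherwise return the original string unchanged."""
    try:
        # Remove line breaks first; validate=True rejects any other
        # non-base64 character instead of silently ignoring it.
        return b64decode(content.replace("\n", "").replace("\r", ""), validate=True)
    except binascii.Error:
        return content


print(decode_if_base64("SGVsbG8="))          # b'Hello'
print(decode_if_base64("<?xml version..."))  # returned as-is, not base64
```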
33 changed files with 1136 additions and 516 deletions

View File

@@ -30,7 +30,7 @@ jobs:
strategy:
fail-fast: false
matrix:
python-version: ["3.9", "3.10", "3.11", "3.12", "3.13"]
python-version: ["3.9", "3.10", "3.11", "3.12", "3.13", "3.14"]
steps:
- uses: actions/checkout@v5

View File

@@ -1,5 +1,23 @@
# Changelog
## 9.1.0
## Enhancements
- Add TCP and TLS support for syslog output. (#656)
- Skip DNS lookups in GitHub Actions to prevent DNS timeouts during tests. (#657)
- Remove microseconds from DMARC aggregate report time ranges before parsing them.
## 9.0.10
- Support Python 3.14+
## 9.0.9
### Fixes
- Validate that a string is base64-encoded before trying to base64 decode it. (PRs #648 and #649)
## 9.0.8
### Fixes

View File

@@ -61,4 +61,4 @@ for RHEL or Debian.
| 3.11 | ✅ | Actively maintained; supported until June 2028 (Debian 12) |
| 3.12 | ✅ | Actively maintained; supported until May 2035 (RHEL 10) |
| 3.13 | ✅ | Actively maintained; supported until June 2030 (Debian 13) |
| 3.14 | | Not currently supported due to [this imapclient bug](https://github.com/mjs/imapclient/issues/618)|
| 3.14 | | Actively maintained |

ci.ini
View File

@@ -3,6 +3,7 @@ save_aggregate = True
save_forensic = True
save_smtp_tls = True
debug = True
offline = True
[elasticsearch]
hosts = http://localhost:9200

View File

@@ -214,7 +214,7 @@ Kibana index patterns with versions that match the upgraded indexes:
1. Login in to Kibana, and click on Management
2. Under Kibana, click on Saved Objects
3. Check the checkboxes for the `dmarc_aggregate` and `dmarc_forensic`
3. Check the checkboxes for the `dmarc_aggregate` and `dmarc_failure`
index patterns
4. Click Delete
5. Click Delete on the confirmation message

View File

@@ -2,7 +2,7 @@
[general]
save_aggregate = True
save_forensic = True
save_failure = True
[imap]
host = imap.example.com
@@ -29,14 +29,3 @@ token_file = /etc/example/token.json
include_spam_trash = True
paginate_messages = True
scopes = https://www.googleapis.com/auth/gmail.modify
[msgraph]
auth_method = ClientSecret
client_id = 12345678-90ab-cdef-1234-567890abcdef
client_secret = your-client-secret-here
tenant_id = 12345678-90ab-cdef-1234-567890abcdef
mailbox = dmarc-reports@example.com
# Use standard folder names - they work across all locales
# and avoid "Default folder Root not found" errors
reports_folder = Inbox
archive_folder = Archive

View File

@@ -34,7 +34,7 @@ and Valimail.
## Features
- Parses draft and 1.0 standard aggregate/rua DMARC reports
- Parses forensic/failure/ruf DMARC reports
- Parses failure/ruf DMARC reports
- Parses reports from SMTP TLS Reporting
- Can parse reports from an inbox over IMAP, Microsoft Graph, or Gmail API
- Transparently handles gzip or zip compressed reports
@@ -61,7 +61,7 @@ for RHEL or Debian.
| 3.11 | ✅ | Actively maintained; supported until June 2028 (Debian 12) |
| 3.12 | ✅ | Actively maintained; supported until May 2035 (RHEL 10) |
| 3.13 | ✅ | Actively maintained; supported until June 2030 (Debian 13) |
| 3.14 | | Not currently supported due to [this imapclient bug](https://github.com/mjs/imapclient/issues/618)|
| 3.14 | | Actively maintained |
```{toctree}
:caption: 'Contents'

View File

@@ -74,14 +74,14 @@ the DMARC Summary dashboard. To view failures only, use the pie chart.
Any other filters work the same way. You can also add your own custom temporary
filters by clicking on Add Filter at the upper right of the page.
## DMARC Forensic Samples
## DMARC Failure Samples
The DMARC Forensic Samples dashboard contains information on DMARC forensic
reports (also known as failure reports or ruf reports). These reports contain
The DMARC Failure Samples dashboard contains information on DMARC failure
reports (also known as ruf reports). These reports contain
samples of emails that have failed to pass DMARC.
:::{note}
Most recipients do not send forensic/failure/ruf reports at all to avoid
Most recipients do not send failure/ruf reports at all to avoid
privacy leaks. Some recipients (notably Chinese webmail services) will only
supply the headers of sample emails. Very few provide the entire email.
:::

View File

@@ -96,12 +96,12 @@ draft,acme.com,noreply-dmarc-support@acme.com,http://acme.com/dmarc/support,9391
```
## Sample forensic report output
## Sample failure report output
Thanks to GitHub user [xennn](https://github.com/xennn) for the anonymized
[forensic report email sample](<https://github.com/domainaware/parsedmarc/raw/master/samples/forensic/DMARC%20Failure%20Report%20for%20domain.de%20(mail-from%3Dsharepoint%40domain.de%2C%20ip%3D10.10.10.10).eml>).
[failure report email sample](<https://github.com/domainaware/parsedmarc/raw/master/samples/forensic/DMARC%20Failure%20Report%20for%20domain.de%20(mail-from%3Dsharepoint%40domain.de%2C%20ip%3D10.10.10.10).eml>).
### JSON forensic report
### JSON failure report
```json
{
@@ -190,7 +190,7 @@ Thanks to GitHub user [xennn](https://github.com/xennn) for the anonymized
}
```
### CSV forensic report
### CSV failure report
```text
feedback_type,user_agent,version,original_envelope_id,original_mail_from,original_rcpt_to,arrival_date,arrival_date_utc,subject,message_id,authentication_results,dkim_domain,source_ip_address,source_country,source_reverse_dns,source_base_domain,delivery_result,auth_failure,reported_domain,authentication_mechanisms,sample_headers_only

View File

@@ -1,10 +1,10 @@
# Splunk
Starting in version 4.3.0 `parsedmarc` supports sending aggregate and/or
forensic DMARC data to a Splunk [HTTP Event collector (HEC)].
failure DMARC data to a Splunk [HTTP Event collector (HEC)].
The project repository contains [XML files] for premade Splunk
dashboards for aggregate and forensic DMARC reports.
dashboards for aggregate and failure DMARC reports.
Copy and paste the contents of each file into a separate Splunk
dashboard XML editor.

View File

@@ -4,9 +4,9 @@
```text
usage: parsedmarc [-h] [-c CONFIG_FILE] [--strip-attachment-payloads] [-o OUTPUT]
[--aggregate-json-filename AGGREGATE_JSON_FILENAME] [--forensic-json-filename FORENSIC_JSON_FILENAME]
[--aggregate-json-filename AGGREGATE_JSON_FILENAME] [--failure-json-filename FAILURE_JSON_FILENAME]
[--smtp-tls-json-filename SMTP_TLS_JSON_FILENAME] [--aggregate-csv-filename AGGREGATE_CSV_FILENAME]
[--forensic-csv-filename FORENSIC_CSV_FILENAME] [--smtp-tls-csv-filename SMTP_TLS_CSV_FILENAME]
[--failure-csv-filename FAILURE_CSV_FILENAME] [--smtp-tls-csv-filename SMTP_TLS_CSV_FILENAME]
[-n NAMESERVERS [NAMESERVERS ...]] [-t DNS_TIMEOUT] [--offline] [-s] [-w] [--verbose] [--debug]
[--log-file LOG_FILE] [--no-prettify-json] [-v]
[file_path ...]
@@ -14,26 +14,26 @@ usage: parsedmarc [-h] [-c CONFIG_FILE] [--strip-attachment-payloads] [-o OUTPUT
Parses DMARC reports
positional arguments:
file_path one or more paths to aggregate or forensic report files, emails, or mbox files'
file_path one or more paths to aggregate or failure report files, emails, or mbox files'
options:
-h, --help show this help message and exit
-c CONFIG_FILE, --config-file CONFIG_FILE
a path to a configuration file (--silent implied)
--strip-attachment-payloads
remove attachment payloads from forensic report output
remove attachment payloads from failure report output
-o OUTPUT, --output OUTPUT
write output files to the given directory
--aggregate-json-filename AGGREGATE_JSON_FILENAME
filename for the aggregate JSON output file
--forensic-json-filename FORENSIC_JSON_FILENAME
filename for the forensic JSON output file
--failure-json-filename FAILURE_JSON_FILENAME
filename for the failure JSON output file
--smtp-tls-json-filename SMTP_TLS_JSON_FILENAME
filename for the SMTP TLS JSON output file
--aggregate-csv-filename AGGREGATE_CSV_FILENAME
filename for the aggregate CSV output file
--forensic-csv-filename FORENSIC_CSV_FILENAME
filename for the forensic CSV output file
--failure-csv-filename FAILURE_CSV_FILENAME
filename for the failure CSV output file
--smtp-tls-csv-filename SMTP_TLS_CSV_FILENAME
filename for the SMTP TLS CSV output file
-n NAMESERVERS [NAMESERVERS ...], --nameservers NAMESERVERS [NAMESERVERS ...]
@@ -70,7 +70,7 @@ For example
[general]
save_aggregate = True
save_forensic = True
save_failure = True
[imap]
host = imap.example.com
@@ -109,7 +109,7 @@ mode = tcp
[webhook]
aggregate_url = https://aggregate_url.example.com
forensic_url = https://forensic_url.example.com
failure_url = https://failure_url.example.com
smtp_tls_url = https://smtp_tls_url.example.com
timeout = 60
```
@@ -119,7 +119,7 @@ The full set of configuration options are:
- `general`
- `save_aggregate` - bool: Save aggregate report data to
Elasticsearch, Splunk and/or S3
- `save_forensic` - bool: Save forensic report data to
- `save_failure` - bool: Save failure report data to
Elasticsearch, Splunk and/or S3
- `save_smtp_tls` - bool: Save SMTP-STS report data to
Elasticsearch, Splunk and/or S3
@@ -130,7 +130,7 @@ The full set of configuration options are:
- `output` - str: Directory to place JSON and CSV files in. This is required if you set either of the JSON output file options.
- `aggregate_json_filename` - str: filename for the aggregate
JSON output file
- `forensic_json_filename` - str: filename for the forensic
- `failure_json_filename` - str: filename for the failure
JSON output file
- `ip_db_path` - str: An optional custom path to a MMDB file
from MaxMind or DBIP
@@ -171,8 +171,8 @@ The full set of configuration options are:
- `check_timeout` - int: Number of seconds to wait for a IMAP
IDLE response or the number of seconds until the next
mail check (Default: `30`)
- `since` - str: Search for messages since certain time. (Examples: `5m|3h|2d|1w`)
Acceptable units - {"m":"minutes", "h":"hours", "d":"days", "w":"weeks"}.
- `since` - str: Search for messages since certain time. (Examples: `5m|3h|2d|1w`)
Acceptable units - {"m":"minutes", "h":"hours", "d":"days", "w":"weeks"}.
Defaults to `1d` if incorrect value is provided.
- `imap`
- `host` - str: The IMAP server hostname or IP address
@@ -229,18 +229,6 @@ The full set of configuration options are:
username, you must grant the app `Mail.ReadWrite.Shared`.
:::
:::{tip}
When configuring folder names (e.g., `reports_folder`, `archive_folder`),
you can use standard folder names like `Inbox`, `Archive`, `Sent Items`, etc.
These will be automatically mapped to Microsoft Graph's well-known folder names,
which works reliably across different mailbox locales and avoids issues with
uninitialized or shared mailboxes. Supported folder names include:
- English: Inbox, Sent Items, Deleted Items, Drafts, Junk Email, Archive, Outbox
- German: Posteingang, Gesendete Elemente, Gelöschte Elemente, Entwürfe, Junk-E-Mail, Archiv
- French: Boîte de réception, Éléments envoyés, Éléments supprimés, Brouillons, Courrier indésirable, Archives
- Spanish: Bandeja de entrada, Elementos enviados, Elementos eliminados, Borradores, Correo no deseado
:::
:::{warning}
If you are using the `ClientSecret` auth method, you need to
grant the `Mail.ReadWrite` (application) permission to the
@@ -252,7 +240,7 @@ The full set of configuration options are:
group and use that as the group id.
```powershell
New-ApplicationAccessPolicy -AccessRight RestrictAccess
New-ApplicationAccessPolicy -AccessRight RestrictAccess
-AppId "<CLIENT_ID>" -PolicyScopeGroupId "<MAILBOX>"
-Description "Restrict access to dmarc reports mailbox."
```
@@ -318,7 +306,7 @@ The full set of configuration options are:
- `skip_certificate_verification` - bool: Skip certificate
verification (not recommended)
- `aggregate_topic` - str: The Kafka topic for aggregate reports
- `forensic_topic` - str: The Kafka topic for forensic reports
- `failure_topic` - str: The Kafka topic for failure reports
- `smtp`
- `host` - str: The SMTP hostname
- `port` - int: The SMTP port (Default: `25`)
@@ -348,13 +336,65 @@ The full set of configuration options are:
- `secret_access_key` - str: The secret access key (Optional)
- `syslog`
- `server` - str: The Syslog server name or IP address
- `port` - int: The UDP port to use (Default: `514`)
- `port` - int: The port to use (Default: `514`)
- `protocol` - str: The protocol to use: `udp`, `tcp`, or `tls` (Default: `udp`)
- `cafile_path` - str: Path to CA certificate file for TLS server verification (Optional)
- `certfile_path` - str: Path to client certificate file for TLS authentication (Optional)
- `keyfile_path` - str: Path to client private key file for TLS authentication (Optional)
- `timeout` - float: Connection timeout in seconds for TCP/TLS (Default: `5.0`)
- `retry_attempts` - int: Number of retry attempts for failed connections (Default: `3`)
- `retry_delay` - int: Delay in seconds between retry attempts (Default: `5`)
**Example UDP configuration (default):**
```ini
[syslog]
server = syslog.example.com
port = 514
```
**Example TCP configuration:**
```ini
[syslog]
server = syslog.example.com
port = 6514
protocol = tcp
timeout = 10.0
retry_attempts = 5
```
**Example TLS configuration with server verification:**
```ini
[syslog]
server = syslog.example.com
port = 6514
protocol = tls
cafile_path = /path/to/ca-cert.pem
timeout = 10.0
```
**Example TLS configuration with mutual authentication:**
```ini
[syslog]
server = syslog.example.com
port = 6514
protocol = tls
cafile_path = /path/to/ca-cert.pem
certfile_path = /path/to/client-cert.pem
keyfile_path = /path/to/client-key.pem
timeout = 10.0
retry_attempts = 3
retry_delay = 5
```
- `gmail_api`
- `credentials_file` - str: Path to file containing the
credentials, None to disable (Default: `None`)
- `token_file` - str: Path to save the token file
(Default: `.token`)
:::{note}
credentials_file and token_file can be obtained by following the [quickstart](https://developers.google.com/gmail/api/quickstart/python). Please change the scope to `https://www.googleapis.com/auth/gmail.modify`.
:::
@@ -374,7 +414,7 @@ The full set of configuration options are:
- `dce` - str: The Data Collection Endpoint (DCE). Example: `https://{DCE-NAME}.{REGION}.ingest.monitor.azure.com`.
- `dcr_immutable_id` - str: The immutable ID of the Data Collection Rule (DCR)
- `dcr_aggregate_stream` - str: The stream name for aggregate reports in the DCR
- `dcr_forensic_stream` - str: The stream name for the forensic reports in the DCR
- `dcr_failure_stream` - str: The stream name for the failure reports in the DCR
- `dcr_smtp_tls_stream` - str: The stream name for the SMTP TLS reports in the DCR
:::{note}
@@ -391,7 +431,7 @@ The full set of configuration options are:
- `webhook` - Post the individual reports to a webhook url with the report as the JSON body
- `aggregate_url` - str: URL of the webhook which should receive the aggregate reports
- `forensic_url` - str: URL of the webhook which should receive the forensic reports
- `failure_url` - str: URL of the webhook which should receive the failure reports
- `smtp_tls_url` - str: URL of the webhook which should receive the smtp_tls reports
- `timeout` - int: Interval in which the webhook call should timeout
@@ -406,26 +446,26 @@ blocks DNS requests to outside resolvers.
:::
:::{note}
`save_aggregate` and `save_forensic` are separate options
because you may not want to save forensic reports
(also known as failure reports) to your Elasticsearch instance,
`save_aggregate` and `save_failure` are separate options
because you may not want to save failure reports
(formerly known as forensic reports) to your Elasticsearch instance,
particularly if you are in a highly-regulated industry that
handles sensitive data, such as healthcare or finance. If your
legitimate outgoing email fails DMARC, it is possible
that email may appear later in a forensic report.
that email may appear later in a failure report.
Forensic reports contain the original headers of an email that
Failure reports contain the original headers of an email that
failed a DMARC check, and sometimes may also include the
full message body, depending on the policy of the reporting
organization.
Most reporting organizations do not send forensic reports of any
Most reporting organizations do not send failure reports of any
kind for privacy reasons. While aggregate DMARC reports are sent
at least daily, it is normal to receive very few forensic reports.
at least daily, it is normal to receive very few failure reports.
An alternative approach is to still collect forensic/failure/ruf
An alternative approach is to still collect failure/ruf
reports in your DMARC inbox, but run `parsedmarc` with
```save_forensic = True``` manually on a separate IMAP folder (using
```save_failure = True``` manually on a separate IMAP folder (using
the ```reports_folder``` option), after you have manually moved
known samples you want to save to that folder
(e.g. malicious samples and non-sensitive legitimate samples).
@@ -454,7 +494,7 @@ Update the limit to 2k per example:
PUT _cluster/settings
{
"persistent" : {
"cluster.max_shards_per_node" : 2000
"cluster.max_shards_per_node" : 2000
}
}
```

View File

@@ -83,7 +83,7 @@
"id": 28,
"panels": [
{
"content": "# DMARC Summary\r\nAs the name suggests, this dashboard is the best place to start reviewing your aggregate DMARC data.\r\n\r\nAcross the top of the dashboard, three pie charts display the percentage of alignment pass/fail for SPF, DKIM, and DMARC. Clicking on any chart segment will filter for that value.\r\n\r\n***Note***\r\nMessages should not be considered malicious just because they failed to pass DMARC; especially if you have just started collecting data. It may be a legitimate service that needs SPF and DKIM configured correctly.\r\n\r\nStart by filtering the results to only show failed DKIM alignment. While DMARC passes if a message passes SPF or DKIM alignment, only DKIM alignment remains valid when a message is forwarded without changing the from address, which is often caused by a mailbox forwarding rule. This is because DKIM signatures are part of the message headers, whereas SPF relies on SMTP session headers.\r\n\r\nUnderneath the pie charts. you can see graphs of DMARC passage and message disposition over time.\r\n\r\nUnder the graphs you will find the most useful data tables on the dashboard. On the left, there is a list of organizations that are sending you DMARC reports. In the center, there is a list of sending servers grouped by the base domain in their reverse DNS. On the right, there is a list of email from domains, sorted by message volume.\r\n\r\nBy hovering your mouse over a data table value and using the magnifying glass icons, you can filter on or filter out different values. Start by looking at the Message Sources by Reverse DNS table. Find a sender that you recognize, such as an email marketing service, hover over it, and click on the plus (+) magnifying glass icon, to add a filter that only shows results for that sender. Now, look at the Message From Header table to the right. That shows you the domains that a sender is sending as, which might tell you which brand/business is using a particular service. With that information, you can contact them and have them set up DKIM.\r\n\r\n***Note***\r\nIf you have a lot of B2C customers, you may see a high volume of emails as your domains coming from consumer email services, such as Google/Gmail and Yahoo! This occurs when customers have mailbox rules in place that forward emails from an old account to a new account, which is why DKIM authentication is so important, as mentioned earlier. Similar patterns may be observed with businesses who send from reverse DNS addressees of parent, subsidiary, and outdated brands.\r\n\r\n***Note***\r\nYou can add your own custom temporary filters by clicking on Add Filter at the upper right of the page.\r\n\r\n# DMARC Forensic Samples\r\nThe DMARC Forensic Samples section contains information on DMARC forensic reports (also known as failure reports or ruf reports). These reports contain samples of emails that have failed to pass DMARC.\r\n\r\n***Note***\r\nMost recipients do not send forensic/failure/ruf reports at all to avoid privacy leaks. Some recipients (notably Chinese webmail services) will only supply the headers of sample emails. 
Very few provide the entire email.\r\n\r\n# DMARC Alignment Guide\r\nDMARC ensures that SPF and DKIM authentication mechanisms actually authenticate against the same domain that the end user sees.\r\n\r\nA message passes a DMARC check by passing DKIM or SPF, **as long as the related indicators are also in alignment.**\r\n\r\n| \t| DKIM \t| SPF \t|\r\n|-----------\t|--------------------------------------------------------------------------------------------------------------------------------------------------\t|----------------------------------------------------------------------------------------------------------------\t|\r\n| **Passing** \t| The signature in the DKIM header is validated using a public key that is published as a DNS record of the domain name specified in the signature \t| The mail server's IP address is listed in the SPF record of the domain in the SMTP envelope's mail from header \t|\r\n| **Alignment** \t| The signing domain aligns with the domain in the message's from header \t| The domain in the SMTP envelope's mail from header aligns with the domain in the message's from header \t|\r\n\r\n\r\n# Further Reading\r\n[Demystifying DMARC: A guide to preventing email spoofing](https://seanthegeek.net/459/demystifying-dmarc/amp/)\r\n\r\n[DMARC Manual](https://menainfosec.com/wp-content/uploads/2017/12/DMARC_Service_Manual.pdf)\r\n\r\n[What is “External Destination Verification”?](https://dmarcian.com/what-is-external-destination-verification/)",
"content": "# DMARC Summary\r\nAs the name suggests, this dashboard is the best place to start reviewing your aggregate DMARC data.\r\n\r\nAcross the top of the dashboard, three pie charts display the percentage of alignment pass/fail for SPF, DKIM, and DMARC. Clicking on any chart segment will filter for that value.\r\n\r\n***Note***\r\nMessages should not be considered malicious just because they failed to pass DMARC; especially if you have just started collecting data. It may be a legitimate service that needs SPF and DKIM configured correctly.\r\n\r\nStart by filtering the results to only show failed DKIM alignment. While DMARC passes if a message passes SPF or DKIM alignment, only DKIM alignment remains valid when a message is forwarded without changing the from address, which is often caused by a mailbox forwarding rule. This is because DKIM signatures are part of the message headers, whereas SPF relies on SMTP session headers.\r\n\r\nUnderneath the pie charts. you can see graphs of DMARC passage and message disposition over time.\r\n\r\nUnder the graphs you will find the most useful data tables on the dashboard. On the left, there is a list of organizations that are sending you DMARC reports. In the center, there is a list of sending servers grouped by the base domain in their reverse DNS. On the right, there is a list of email from domains, sorted by message volume.\r\n\r\nBy hovering your mouse over a data table value and using the magnifying glass icons, you can filter on or filter out different values. Start by looking at the Message Sources by Reverse DNS table. Find a sender that you recognize, such as an email marketing service, hover over it, and click on the plus (+) magnifying glass icon, to add a filter that only shows results for that sender. Now, look at the Message From Header table to the right. That shows you the domains that a sender is sending as, which might tell you which brand/business is using a particular service. With that information, you can contact them and have them set up DKIM.\r\n\r\n***Note***\r\nIf you have a lot of B2C customers, you may see a high volume of emails as your domains coming from consumer email services, such as Google/Gmail and Yahoo! This occurs when customers have mailbox rules in place that forward emails from an old account to a new account, which is why DKIM authentication is so important, as mentioned earlier. Similar patterns may be observed with businesses who send from reverse DNS addressees of parent, subsidiary, and outdated brands.\r\n\r\n***Note***\r\nYou can add your own custom temporary filters by clicking on Add Filter at the upper right of the page.\r\n\r\n# DMARC Failure Samples\r\nThe DMARC Failure Samples section contains information on DMARC failure reports (also known as ruf reports). These reports contain samples of emails that have failed to pass DMARC.\r\n\r\n***Note***\r\nMost recipients do not send failure/ruf reports at all to avoid privacy leaks. Some recipients (notably Chinese webmail services) will only supply the headers of sample emails. 
Very few provide the entire email.\r\n\r\n# DMARC Alignment Guide\r\nDMARC ensures that SPF and DKIM authentication mechanisms actually authenticate against the same domain that the end user sees.\r\n\r\nA message passes a DMARC check by passing DKIM or SPF, **as long as the related indicators are also in alignment.**\r\n\r\n| \t| DKIM \t| SPF \t|\r\n|-----------\t|--------------------------------------------------------------------------------------------------------------------------------------------------\t|----------------------------------------------------------------------------------------------------------------\t|\r\n| **Passing** \t| The signature in the DKIM header is validated using a public key that is published as a DNS record of the domain name specified in the signature \t| The mail server's IP address is listed in the SPF record of the domain in the SMTP envelope's mail from header \t|\r\n| **Alignment** \t| The signing domain aligns with the domain in the message's from header \t| The domain in the SMTP envelope's mail from header aligns with the domain in the message's from header \t|\r\n\r\n\r\n# Further Reading\r\n[Demystifying DMARC: A guide to preventing email spoofing](https://seanthegeek.net/459/demystifying-dmarc/amp/)\r\n\r\n[DMARC Manual](https://menainfosec.com/wp-content/uploads/2017/12/DMARC_Service_Manual.pdf)\r\n\r\n[What is “External Destination Verification”?](https://dmarcian.com/what-is-external-destination-verification/)",
"datasource": null,
"fieldConfig": {
"defaults": {
@@ -101,7 +101,7 @@
"links": [],
"mode": "markdown",
"options": {
"content": "# DMARC Summary\r\nAs the name suggests, this dashboard is the best place to start reviewing your aggregate DMARC data.\r\n\r\nAcross the top of the dashboard, three pie charts display the percentage of alignment pass/fail for SPF, DKIM, and DMARC. Clicking on any chart segment will filter for that value.\r\n\r\n***Note***\r\nMessages should not be considered malicious just because they failed to pass DMARC; especially if you have just started collecting data. It may be a legitimate service that needs SPF and DKIM configured correctly.\r\n\r\nStart by filtering the results to only show failed DKIM alignment. While DMARC passes if a message passes SPF or DKIM alignment, only DKIM alignment remains valid when a message is forwarded without changing the from address, which is often caused by a mailbox forwarding rule. This is because DKIM signatures are part of the message headers, whereas SPF relies on SMTP session headers.\r\n\r\nUnderneath the pie charts. you can see graphs of DMARC passage and message disposition over time.\r\n\r\nUnder the graphs you will find the most useful data tables on the dashboard. On the left, there is a list of organizations that are sending you DMARC reports. In the center, there is a list of sending servers grouped by the base domain in their reverse DNS. On the right, there is a list of email from domains, sorted by message volume.\r\n\r\nBy hovering your mouse over a data table value and using the magnifying glass icons, you can filter on or filter out different values. Start by looking at the Message Sources by Reverse DNS table. Find a sender that you recognize, such as an email marketing service, hover over it, and click on the plus (+) magnifying glass icon, to add a filter that only shows results for that sender. Now, look at the Message From Header table to the right. That shows you the domains that a sender is sending as, which might tell you which brand/business is using a particular service. With that information, you can contact them and have them set up DKIM.\r\n\r\n***Note***\r\nIf you have a lot of B2C customers, you may see a high volume of emails as your domains coming from consumer email services, such as Google/Gmail and Yahoo! This occurs when customers have mailbox rules in place that forward emails from an old account to a new account, which is why DKIM authentication is so important, as mentioned earlier. Similar patterns may be observed with businesses who send from reverse DNS addressees of parent, subsidiary, and outdated brands.\r\n\r\n***Note***\r\nYou can add your own custom temporary filters by clicking on Add Filter at the upper right of the page.\r\n\r\n# DMARC Forensic Samples\r\nThe DMARC Forensic Samples section contains information on DMARC forensic reports (also known as failure reports or ruf reports). These reports contain samples of emails that have failed to pass DMARC.\r\n\r\n***Note***\r\nMost recipients do not send forensic/failure/ruf reports at all to avoid privacy leaks. Some recipients (notably Chinese webmail services) will only supply the headers of sample emails. 
Very few provide the entire email.\r\n\r\n# DMARC Alignment Guide\r\nDMARC ensures that SPF and DKIM authentication mechanisms actually authenticate against the same domain that the end user sees.\r\n\r\nA message passes a DMARC check by passing DKIM or SPF, **as long as the related indicators are also in alignment.**\r\n\r\n| \t| DKIM \t| SPF \t|\r\n|-----------\t|--------------------------------------------------------------------------------------------------------------------------------------------------\t|----------------------------------------------------------------------------------------------------------------\t|\r\n| **Passing** \t| The signature in the DKIM header is validated using a public key that is published as a DNS record of the domain name specified in the signature \t| The mail server's IP address is listed in the SPF record of the domain in the SMTP envelope's mail from header \t|\r\n| **Alignment** \t| The signing domain aligns with the domain in the message's from header \t| The domain in the SMTP envelope's mail from header aligns with the domain in the message's from header \t|\r\n\r\n\r\n# Further Reading\r\n[Demystifying DMARC: A guide to preventing email spoofing](https://seanthegeek.net/459/demystifying-dmarc/amp/)\r\n\r\n[DMARC Manual](https://menainfosec.com/wp-content/uploads/2017/12/DMARC_Service_Manual.pdf)\r\n\r\n[What is “External Destination Verification”?](https://dmarcian.com/what-is-external-destination-verification/)",
"content": "# DMARC Summary\r\nAs the name suggests, this dashboard is the best place to start reviewing your aggregate DMARC data.\r\n\r\nAcross the top of the dashboard, three pie charts display the percentage of alignment pass/fail for SPF, DKIM, and DMARC. Clicking on any chart segment will filter for that value.\r\n\r\n***Note***\r\nMessages should not be considered malicious just because they failed to pass DMARC; especially if you have just started collecting data. It may be a legitimate service that needs SPF and DKIM configured correctly.\r\n\r\nStart by filtering the results to only show failed DKIM alignment. While DMARC passes if a message passes SPF or DKIM alignment, only DKIM alignment remains valid when a message is forwarded without changing the from address, which is often caused by a mailbox forwarding rule. This is because DKIM signatures are part of the message headers, whereas SPF relies on SMTP session headers.\r\n\r\nUnderneath the pie charts. you can see graphs of DMARC passage and message disposition over time.\r\n\r\nUnder the graphs you will find the most useful data tables on the dashboard. On the left, there is a list of organizations that are sending you DMARC reports. In the center, there is a list of sending servers grouped by the base domain in their reverse DNS. On the right, there is a list of email from domains, sorted by message volume.\r\n\r\nBy hovering your mouse over a data table value and using the magnifying glass icons, you can filter on or filter out different values. Start by looking at the Message Sources by Reverse DNS table. Find a sender that you recognize, such as an email marketing service, hover over it, and click on the plus (+) magnifying glass icon, to add a filter that only shows results for that sender. Now, look at the Message From Header table to the right. That shows you the domains that a sender is sending as, which might tell you which brand/business is using a particular service. With that information, you can contact them and have them set up DKIM.\r\n\r\n***Note***\r\nIf you have a lot of B2C customers, you may see a high volume of emails as your domains coming from consumer email services, such as Google/Gmail and Yahoo! This occurs when customers have mailbox rules in place that forward emails from an old account to a new account, which is why DKIM authentication is so important, as mentioned earlier. Similar patterns may be observed with businesses who send from reverse DNS addressees of parent, subsidiary, and outdated brands.\r\n\r\n***Note***\r\nYou can add your own custom temporary filters by clicking on Add Filter at the upper right of the page.\r\n\r\n# DMARC Failure Samples\r\nThe DMARC Failure Samples section contains information on DMARC failure reports (also known as ruf reports). These reports contain samples of emails that have failed to pass DMARC.\r\n\r\n***Note***\r\nMost recipients do not send failure/ruf reports at all to avoid privacy leaks. Some recipients (notably Chinese webmail services) will only supply the headers of sample emails. 
Very few provide the entire email.\r\n\r\n# DMARC Alignment Guide\r\nDMARC ensures that SPF and DKIM authentication mechanisms actually authenticate against the same domain that the end user sees.\r\n\r\nA message passes a DMARC check by passing DKIM or SPF, **as long as the related indicators are also in alignment.**\r\n\r\n| \t| DKIM \t| SPF \t|\r\n|-----------\t|--------------------------------------------------------------------------------------------------------------------------------------------------\t|----------------------------------------------------------------------------------------------------------------\t|\r\n| **Passing** \t| The signature in the DKIM header is validated using a public key that is published as a DNS record of the domain name specified in the signature \t| The mail server's IP address is listed in the SPF record of the domain in the SMTP envelope's mail from header \t|\r\n| **Alignment** \t| The signing domain aligns with the domain in the message's from header \t| The domain in the SMTP envelope's mail from header aligns with the domain in the message's from header \t|\r\n\r\n\r\n# Further Reading\r\n[Demystifying DMARC: A guide to preventing email spoofing](https://seanthegeek.net/459/demystifying-dmarc/amp/)\r\n\r\n[DMARC Manual](https://menainfosec.com/wp-content/uploads/2017/12/DMARC_Service_Manual.pdf)\r\n\r\n[What is “External Destination Verification”?](https://dmarcian.com/what-is-external-destination-verification/)",
"mode": "markdown"
},
"pluginVersion": "7.1.0",

File diff suppressed because one or more lines are too long

View File

@@ -48,6 +48,7 @@ from parsedmarc.mail import (
)
from parsedmarc.types import (
AggregateReport,
FailureReport,
ForensicReport,
ParsedReport,
ParsingResults,
@@ -73,6 +74,7 @@ text_report_regex = re.compile(r"\s*([a-zA-Z\s]+):\s(.+)", re.MULTILINE)
MAGIC_ZIP = b"\x50\x4b\x03\x04"
MAGIC_GZIP = b"\x1f\x8b"
MAGIC_XML = b"\x3c\x3f\x78\x6d\x6c\x20"
MAGIC_XML_TAG = b"\x3c" # '<' - XML starting with an element tag (no declaration)
MAGIC_JSON = b"\x7b"
EMAIL_SAMPLE_CONTENT_TYPES = (
@@ -107,8 +109,12 @@ class InvalidAggregateReport(InvalidDMARCReport):
"""Raised when an invalid DMARC aggregate report is encountered"""
class InvalidForensicReport(InvalidDMARCReport):
"""Raised when an invalid DMARC forensic report is encountered"""
class InvalidFailureReport(InvalidDMARCReport):
"""Raised when an invalid DMARC failure report is encountered"""
# Backward-compatible alias
InvalidForensicReport = InvalidFailureReport
def _bucket_interval_by_day(
@@ -348,8 +354,6 @@ def _parse_report_record(
}
if "disposition" in policy_evaluated:
new_policy_evaluated["disposition"] = policy_evaluated["disposition"]
if new_policy_evaluated["disposition"].strip().lower() == "pass":
new_policy_evaluated["disposition"] = "none"
if "dkim" in policy_evaluated:
new_policy_evaluated["dkim"] = policy_evaluated["dkim"]
if "spf" in policy_evaluated:
@@ -410,6 +414,7 @@ def _parse_report_record(
new_result["result"] = result["result"]
else:
new_result["result"] = "none"
new_result["human_result"] = result.get("human_result", None)
new_record["auth_results"]["dkim"].append(new_result)
if not isinstance(auth_results["spf"], list):
@@ -425,6 +430,7 @@ def _parse_report_record(
new_result["result"] = result["result"]
else:
new_result["result"] = "none"
new_result["human_result"] = result.get("human_result", None)
new_record["auth_results"]["spf"].append(new_result)
if "envelope_from" not in new_record["identifiers"]:
@@ -751,8 +757,8 @@ def parse_aggregate_report_xml(
new_report_metadata["report_id"] = report_id
date_range = report["report_metadata"]["date_range"]
begin_ts = int(date_range["begin"])
end_ts = int(date_range["end"])
begin_ts = int(date_range["begin"].split(".")[0])
end_ts = int(date_range["end"].split(".")[0])
span_seconds = end_ts - begin_ts
normalize_timespan = span_seconds > normalize_timespan_threshold_hours * 3600
@@ -777,6 +783,10 @@ def parse_aggregate_report_xml(
else:
errors = report["report_metadata"]["error"]
new_report_metadata["errors"] = errors
generator = None
if "generator" in report_metadata:
generator = report_metadata["generator"]
new_report_metadata["generator"] = generator
new_report["report_metadata"] = new_report_metadata
records = []
policy_published = report["policy_published"]
@@ -810,6 +820,35 @@ def parse_aggregate_report_xml(
if policy_published["fo"] is not None:
fo = policy_published["fo"]
new_policy_published["fo"] = fo
np_ = None
if "np" in policy_published:
if policy_published["np"] is not None:
np_ = policy_published["np"]
if np_ not in ("none", "quarantine", "reject"):
logger.warning(
"Invalid np value: {0}".format(np_)
)
new_policy_published["np"] = np_
testing = None
if "testing" in policy_published:
if policy_published["testing"] is not None:
testing = policy_published["testing"]
if testing not in ("n", "y"):
logger.warning(
"Invalid testing value: {0}".format(testing)
)
new_policy_published["testing"] = testing
discovery_method = None
if "discovery_method" in policy_published:
if policy_published["discovery_method"] is not None:
discovery_method = policy_published["discovery_method"]
if discovery_method not in ("psl", "treewalk"):
logger.warning(
"Invalid discovery_method value: {0}".format(
discovery_method
)
)
new_policy_published["discovery_method"] = discovery_method
new_report["policy_published"] = new_policy_published
if type(report["record"]) is list:
@@ -892,7 +931,11 @@ def extract_report(content: Union[bytes, str, BinaryIO]) -> str:
try:
if isinstance(content, str):
try:
file_object = BytesIO(b64decode(content))
file_object = BytesIO(
b64decode(
content.replace("\n", "").replace("\r", ""), validate=True
)
)
except binascii.Error:
return content
header = file_object.read(6)
@@ -938,6 +981,7 @@ def extract_report(content: Union[bytes, str, BinaryIO]) -> str:
)
elif (
header[: len(MAGIC_XML)] == MAGIC_XML
or header[: len(MAGIC_XML_TAG)] == MAGIC_XML_TAG
or header[: len(MAGIC_JSON)] == MAGIC_JSON
):
report = file_object.read().decode(errors="ignore")
@@ -1061,6 +1105,11 @@ def parsed_aggregate_reports_to_csv_rows(
sp = report["policy_published"]["sp"]
pct = report["policy_published"]["pct"]
fo = report["policy_published"]["fo"]
np_ = report["policy_published"].get("np", None)
testing = report["policy_published"].get("testing", None)
discovery_method = report["policy_published"].get(
"discovery_method", None
)
report_dict: dict[str, Any] = dict(
xml_schema=xml_schema,
@@ -1079,6 +1128,9 @@ def parsed_aggregate_reports_to_csv_rows(
sp=sp,
pct=pct,
fo=fo,
np=np_,
testing=testing,
discovery_method=discovery_method,
)
for record in report["records"]:
@@ -1176,6 +1228,9 @@ def parsed_aggregate_reports_to_csv(
"sp",
"pct",
"fo",
"np",
"testing",
"discovery_method",
"source_ip_address",
"source_country",
"source_reverse_dns",
@@ -1213,7 +1268,7 @@ def parsed_aggregate_reports_to_csv(
return csv_file_object.getvalue()
def parse_forensic_report(
def parse_failure_report(
feedback_report: str,
sample: str,
msg_date: datetime,
@@ -1226,9 +1281,9 @@ def parse_forensic_report(
nameservers: Optional[list[str]] = None,
dns_timeout: float = 2.0,
strip_attachment_payloads: bool = False,
) -> ForensicReport:
) -> FailureReport:
"""
Converts a DMARC forensic report and sample to a dict
Converts a DMARC failure report and sample to a dict
Args:
feedback_report (str): A message's feedback report as a string
@@ -1243,7 +1298,7 @@ def parse_forensic_report(
(Cloudflare's public DNS resolvers by default)
dns_timeout (float): Sets the DNS timeout in seconds
strip_attachment_payloads (bool): Remove attachment payloads from
forensic report results
failure report results
Returns:
dict: A parsed report and sample
@@ -1259,7 +1314,7 @@ def parse_forensic_report(
if "arrival_date" not in parsed_report:
if msg_date is None:
raise InvalidForensicReport("Forensic sample is not a valid email")
raise InvalidFailureReport("Failure sample is not a valid email")
parsed_report["arrival_date"] = msg_date.isoformat()
if "version" not in parsed_report:
@@ -1345,27 +1400,27 @@ def parse_forensic_report(
parsed_report["sample"] = sample
parsed_report["parsed_sample"] = parsed_sample
return cast(ForensicReport, parsed_report)
return cast(FailureReport, parsed_report)
except KeyError as error:
raise InvalidForensicReport("Missing value: {0}".format(error.__str__()))
raise InvalidFailureReport("Missing value: {0}".format(error.__str__()))
except Exception as error:
raise InvalidForensicReport("Unexpected error: {0}".format(error.__str__()))
raise InvalidFailureReport("Unexpected error: {0}".format(error.__str__()))
def parsed_forensic_reports_to_csv_rows(
reports: Union[ForensicReport, list[ForensicReport]],
def parsed_failure_reports_to_csv_rows(
reports: Union[FailureReport, list[FailureReport]],
) -> list[dict[str, Any]]:
"""
Converts one or more parsed forensic reports to a list of dicts in flat CSV
Converts one or more parsed failure reports to a list of dicts in flat CSV
format
Args:
reports: A parsed forensic report or list of parsed forensic reports
reports: A parsed failure report or list of parsed failure reports
Returns:
list: Parsed forensic report data as a list of dicts in flat CSV format
list: Parsed failure report data as a list of dicts in flat CSV format
"""
if isinstance(reports, dict):
reports = [reports]
@@ -1392,18 +1447,18 @@ def parsed_forensic_reports_to_csv_rows(
return rows
def parsed_forensic_reports_to_csv(
reports: Union[ForensicReport, list[ForensicReport]],
def parsed_failure_reports_to_csv(
reports: Union[FailureReport, list[FailureReport]],
) -> str:
"""
Converts one or more parsed forensic reports to flat CSV format, including
Converts one or more parsed failure reports to flat CSV format, including
headers
Args:
reports: A parsed forensic report or list of parsed forensic reports
reports: A parsed failure report or list of parsed failure reports
Returns:
str: Parsed forensic report data in flat CSV format, including headers
str: Parsed failure report data in flat CSV format, including headers
"""
fields = [
"feedback_type",
@@ -1435,7 +1490,7 @@ def parsed_forensic_reports_to_csv(
csv_writer = DictWriter(csv_file, fieldnames=fields)
csv_writer.writeheader()
rows = parsed_forensic_reports_to_csv_rows(reports)
rows = parsed_failure_reports_to_csv_rows(reports)
for row in rows:
new_row: dict[str, Any] = {}
@@ -1473,13 +1528,13 @@ def parse_report_email(
nameservers (list): A list of one or more nameservers to use
dns_timeout (float): Sets the DNS timeout in seconds
strip_attachment_payloads (bool): Remove attachment payloads from
forensic report results
failure report results
keep_alive (callable): keep alive function
normalize_timespan_threshold_hours (float): Normalize timespans beyond this
Returns:
dict:
* ``report_type``: ``aggregate`` or ``forensic``
* ``report_type``: ``aggregate`` or ``failure``
* ``report``: The parsed report
"""
result: Optional[ParsedReport] = None
@@ -1622,7 +1677,7 @@ def parse_report_email(
if feedback_report and sample:
try:
forensic_report = parse_forensic_report(
failure_report = parse_failure_report(
feedback_report,
sample,
msg_date,
@@ -1635,17 +1690,17 @@ def parse_report_email(
dns_timeout=dns_timeout,
strip_attachment_payloads=strip_attachment_payloads,
)
except InvalidForensicReport as e:
except InvalidFailureReport as e:
error = (
'Message with subject "{0}" '
"is not a valid "
"forensic DMARC report: {1}".format(subject, e)
"failure DMARC report: {1}".format(subject, e)
)
raise InvalidForensicReport(error)
raise InvalidFailureReport(error)
except Exception as e:
raise InvalidForensicReport(e.__str__())
raise InvalidFailureReport(e.__str__())
result = {"report_type": "forensic", "report": forensic_report}
result = {"report_type": "failure", "report": failure_report}
return result
if result is None:
@@ -1669,7 +1724,7 @@ def parse_report_file(
keep_alive: Optional[Callable] = None,
normalize_timespan_threshold_hours: float = 24,
) -> ParsedReport:
"""Parses a DMARC aggregate or forensic file at the given path, a
"""Parses a DMARC aggregate or failure file at the given path, a
file-like object. or bytes
Args:
@@ -1678,7 +1733,7 @@ def parse_report_file(
(Cloudflare's public DNS resolvers by default)
dns_timeout (float): Sets the DNS timeout in seconds
strip_attachment_payloads (bool): Remove attachment payloads from
forensic report results
failure report results
ip_db_path (str): Path to a MMDB file from MaxMind or DBIP
always_use_local_files (bool): Do not download files
reverse_dns_map_path (str): Path to a reverse DNS map
@@ -1768,7 +1823,7 @@ def get_dmarc_reports_from_mbox(
(Cloudflare's public DNS resolvers by default)
dns_timeout (float): Sets the DNS timeout in seconds
strip_attachment_payloads (bool): Remove attachment payloads from
forensic report results
failure report results
always_use_local_files (bool): Do not download files
reverse_dns_map_path (str): Path to a reverse DNS map file
reverse_dns_map_url (str): URL to a reverse DNS map file
@@ -1777,11 +1832,11 @@ def get_dmarc_reports_from_mbox(
normalize_timespan_threshold_hours (float): Normalize timespans beyond this
Returns:
dict: Lists of ``aggregate_reports``, ``forensic_reports``, and ``smtp_tls_reports``
dict: Lists of ``aggregate_reports``, ``failure_reports``, and ``smtp_tls_reports``
"""
aggregate_reports: list[AggregateReport] = []
forensic_reports: list[ForensicReport] = []
failure_reports: list[FailureReport] = []
smtp_tls_reports: list[SMTPTLSReport] = []
try:
mbox = mailbox.mbox(input_)
@@ -1818,8 +1873,8 @@ def get_dmarc_reports_from_mbox(
"Skipping duplicate aggregate report "
f"from {report_org} with ID: {report_id}"
)
elif parsed_email["report_type"] == "forensic":
forensic_reports.append(parsed_email["report"])
elif parsed_email["report_type"] == "failure":
failure_reports.append(parsed_email["report"])
elif parsed_email["report_type"] == "smtp_tls":
smtp_tls_reports.append(parsed_email["report"])
except InvalidDMARCReport as error:
@@ -1828,7 +1883,7 @@ def get_dmarc_reports_from_mbox(
raise InvalidDMARCReport("Mailbox {0} does not exist".format(input_))
return {
"aggregate_reports": aggregate_reports,
"forensic_reports": forensic_reports,
"failure_reports": failure_reports,
"smtp_tls_reports": smtp_tls_reports,
}
@@ -1871,7 +1926,7 @@ def get_dmarc_reports_from_mailbox(
nameservers (list): A list of DNS nameservers to query
dns_timeout (float): Set the DNS query timeout
strip_attachment_payloads (bool): Remove attachment payloads from
forensic report results
failure report results
results (dict): Results from the previous run
batch_size (int): Number of messages to read and process before saving
(use 0 for no limit)
@@ -1882,7 +1937,7 @@ def get_dmarc_reports_from_mailbox(
normalize_timespan_threshold_hours (float): Normalize timespans beyond this
Returns:
dict: Lists of ``aggregate_reports``, ``forensic_reports``, and ``smtp_tls_reports``
dict: Lists of ``aggregate_reports``, ``failure_reports``, and ``smtp_tls_reports``
"""
if delete and test:
raise ValueError("delete and test options are mutually exclusive")
@@ -1894,25 +1949,25 @@ def get_dmarc_reports_from_mailbox(
current_time: Optional[Union[datetime, date, str]] = None
aggregate_reports: list[AggregateReport] = []
forensic_reports: list[ForensicReport] = []
failure_reports: list[FailureReport] = []
smtp_tls_reports: list[SMTPTLSReport] = []
aggregate_report_msg_uids = []
forensic_report_msg_uids = []
failure_report_msg_uids = []
smtp_tls_msg_uids = []
aggregate_reports_folder = "{0}/Aggregate".format(archive_folder)
forensic_reports_folder = "{0}/Forensic".format(archive_folder)
failure_reports_folder = "{0}/Forensic".format(archive_folder)
smtp_tls_reports_folder = "{0}/SMTP-TLS".format(archive_folder)
invalid_reports_folder = "{0}/Invalid".format(archive_folder)
if results:
aggregate_reports = results["aggregate_reports"].copy()
forensic_reports = results["forensic_reports"].copy()
failure_reports = results["failure_reports"].copy()
smtp_tls_reports = results["smtp_tls_reports"].copy()
if not test and create_folders:
connection.create_folder(archive_folder)
connection.create_folder(aggregate_reports_folder)
connection.create_folder(forensic_reports_folder)
connection.create_folder(failure_reports_folder)
connection.create_folder(smtp_tls_reports_folder)
connection.create_folder(invalid_reports_folder)
@@ -2016,9 +2071,9 @@ def get_dmarc_reports_from_mailbox(
f"Skipping duplicate aggregate report with ID: {report_id}"
)
aggregate_report_msg_uids.append(message_id)
elif parsed_email["report_type"] == "forensic":
forensic_reports.append(parsed_email["report"])
forensic_report_msg_uids.append(message_id)
elif parsed_email["report_type"] == "failure":
failure_reports.append(parsed_email["report"])
failure_report_msg_uids.append(message_id)
elif parsed_email["report_type"] == "smtp_tls":
smtp_tls_reports.append(parsed_email["report"])
smtp_tls_msg_uids.append(message_id)
@@ -2045,7 +2100,7 @@ def get_dmarc_reports_from_mailbox(
if not test:
if delete:
processed_messages = (
aggregate_report_msg_uids + forensic_report_msg_uids + smtp_tls_msg_uids
aggregate_report_msg_uids + failure_report_msg_uids + smtp_tls_msg_uids
)
number_of_processed_msgs = len(processed_messages)
@@ -2085,24 +2140,24 @@ def get_dmarc_reports_from_mailbox(
message = "Error moving message UID"
e = "{0} {1}: {2}".format(message, msg_uid, e)
logger.error("Mailbox error: {0}".format(e))
if len(forensic_report_msg_uids) > 0:
message = "Moving forensic report messages from"
if len(failure_report_msg_uids) > 0:
message = "Moving failure report messages from"
logger.debug(
"{0} {1} to {2}".format(
message, reports_folder, forensic_reports_folder
message, reports_folder, failure_reports_folder
)
)
number_of_forensic_msgs = len(forensic_report_msg_uids)
for i in range(number_of_forensic_msgs):
msg_uid = forensic_report_msg_uids[i]
number_of_failure_msgs = len(failure_report_msg_uids)
for i in range(number_of_failure_msgs):
msg_uid = failure_report_msg_uids[i]
message = "Moving message"
logger.debug(
"{0} {1} of {2}: UID {3}".format(
message, i + 1, number_of_forensic_msgs, msg_uid
message, i + 1, number_of_failure_msgs, msg_uid
)
)
try:
connection.move_message(msg_uid, forensic_reports_folder)
connection.move_message(msg_uid, failure_reports_folder)
except Exception as e:
e = "Error moving message UID {0}: {1}".format(msg_uid, e)
logger.error("Mailbox error: {0}".format(e))
@@ -2129,7 +2184,7 @@ def get_dmarc_reports_from_mailbox(
logger.error("Mailbox error: {0}".format(e))
results = {
"aggregate_reports": aggregate_reports,
"forensic_reports": forensic_reports,
"failure_reports": failure_reports,
"smtp_tls_reports": smtp_tls_reports,
}
@@ -2206,7 +2261,7 @@ def watch_inbox(
(Cloudflare's public DNS resolvers by default)
dns_timeout (float): Set the DNS query timeout
strip_attachment_payloads (bool): Replace attachment payloads in
forensic report samples with None
failure report samples with None
batch_size (int): Number of messages to read and process before saving
normalize_timespan_threshold_hours (float): Normalize timespans beyond this
"""
@@ -2239,7 +2294,7 @@ def append_json(
filename: str,
reports: Union[
Sequence[AggregateReport],
Sequence[ForensicReport],
Sequence[FailureReport],
Sequence[SMTPTLSReport],
],
) -> None:
@@ -2282,10 +2337,10 @@ def save_output(
*,
output_directory: str = "output",
aggregate_json_filename: str = "aggregate.json",
forensic_json_filename: str = "forensic.json",
failure_json_filename: str = "failure.json",
smtp_tls_json_filename: str = "smtp_tls.json",
aggregate_csv_filename: str = "aggregate.csv",
forensic_csv_filename: str = "forensic.csv",
failure_csv_filename: str = "failure.csv",
smtp_tls_csv_filename: str = "smtp_tls.csv",
):
"""
@@ -2295,15 +2350,15 @@ def save_output(
results: Parsing results
output_directory (str): The path to the directory to save in
aggregate_json_filename (str): Filename for the aggregate JSON file
forensic_json_filename (str): Filename for the forensic JSON file
failure_json_filename (str): Filename for the failure JSON file
smtp_tls_json_filename (str): Filename for the SMTP TLS JSON file
aggregate_csv_filename (str): Filename for the aggregate CSV file
forensic_csv_filename (str): Filename for the forensic CSV file
failure_csv_filename (str): Filename for the failure CSV file
smtp_tls_csv_filename (str): Filename for the SMTP TLS CSV file
"""
aggregate_reports = results["aggregate_reports"]
forensic_reports = results["forensic_reports"]
failure_reports = results["failure_reports"]
smtp_tls_reports = results["smtp_tls_reports"]
output_directory = os.path.expanduser(output_directory)
@@ -2323,12 +2378,12 @@ def save_output(
)
append_json(
os.path.join(output_directory, forensic_json_filename), forensic_reports
os.path.join(output_directory, failure_json_filename), failure_reports
)
append_csv(
os.path.join(output_directory, forensic_csv_filename),
parsed_forensic_reports_to_csv(forensic_reports),
os.path.join(output_directory, failure_csv_filename),
parsed_failure_reports_to_csv(failure_reports),
)
append_json(
@@ -2345,10 +2400,10 @@ def save_output(
os.makedirs(samples_directory)
sample_filenames = []
for forensic_report in forensic_reports:
sample = forensic_report["sample"]
for failure_report in failure_reports:
sample = failure_report["sample"]
message_count = 0
parsed_sample = forensic_report["parsed_sample"]
parsed_sample = failure_report["parsed_sample"]
subject = (
parsed_sample.get("filename_safe_subject")
or parsed_sample.get("subject")
@@ -2482,3 +2537,9 @@ def email_results(
attachments=attachments,
plain_message=message,
)
# Backward-compatible aliases
parse_forensic_report = parse_failure_report
parsed_forensic_reports_to_csv_rows = parsed_failure_reports_to_csv_rows
parsed_forensic_reports_to_csv = parsed_failure_reports_to_csv
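These module-level aliases keep older integrations working without code changes. A minimal sketch, assuming the report list comes from one of the parsing helpers above:

from parsedmarc import (
    parsed_failure_reports_to_csv,
    parsed_forensic_reports_to_csv,  # legacy name, now an alias
)

# Both names resolve to the same function after this change.
assert parsed_forensic_reports_to_csv is parsed_failure_reports_to_csv

# With an empty list this should produce only the CSV header row;
# pass parsed failure report dicts here in real use.
csv_text = parsed_failure_reports_to_csv([])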

View File

@@ -206,10 +206,10 @@ def _main():
reports_,
output_directory=opts.output,
aggregate_json_filename=opts.aggregate_json_filename,
forensic_json_filename=opts.forensic_json_filename,
failure_json_filename=opts.failure_json_filename,
smtp_tls_json_filename=opts.smtp_tls_json_filename,
aggregate_csv_filename=opts.aggregate_csv_filename,
forensic_csv_filename=opts.forensic_csv_filename,
failure_csv_filename=opts.failure_csv_filename,
smtp_tls_csv_filename=opts.smtp_tls_csv_filename,
)
if opts.save_aggregate:
@@ -301,13 +301,13 @@ def _main():
except splunk.SplunkError as e:
logger.error("Splunk HEC error: {0}".format(e.__str__()))
if opts.save_forensic:
for report in reports_["forensic_reports"]:
if opts.save_failure:
for report in reports_["failure_reports"]:
try:
shards = opts.elasticsearch_number_of_shards
replicas = opts.elasticsearch_number_of_replicas
if opts.elasticsearch_hosts:
elastic.save_forensic_report_to_elasticsearch(
elastic.save_failure_report_to_elasticsearch(
report,
index_suffix=opts.elasticsearch_index_suffix,
index_prefix=opts.elasticsearch_index_prefix
@@ -327,7 +327,7 @@ def _main():
shards = opts.opensearch_number_of_shards
replicas = opts.opensearch_number_of_replicas
if opts.opensearch_hosts:
opensearch.save_forensic_report_to_opensearch(
opensearch.save_failure_report_to_opensearch(
report,
index_suffix=opts.opensearch_index_suffix,
index_prefix=opts.opensearch_index_prefix
@@ -345,34 +345,34 @@ def _main():
try:
if opts.kafka_hosts:
kafka_client.save_forensic_reports_to_kafka(
report, kafka_forensic_topic
kafka_client.save_failure_reports_to_kafka(
report, kafka_failure_topic
)
except Exception as error_:
logger.error("Kafka Error: {0}".format(error_.__str__()))
try:
if opts.s3_bucket:
s3_client.save_forensic_report_to_s3(report)
s3_client.save_failure_report_to_s3(report)
except Exception as error_:
logger.error("S3 Error: {0}".format(error_.__str__()))
try:
if opts.syslog_server:
syslog_client.save_forensic_report_to_syslog(report)
syslog_client.save_failure_report_to_syslog(report)
except Exception as error_:
logger.error("Syslog Error: {0}".format(error_.__str__()))
try:
if opts.gelf_host:
gelf_client.save_forensic_report_to_gelf(report)
gelf_client.save_failure_report_to_gelf(report)
except Exception as error_:
logger.error("GELF Error: {0}".format(error_.__str__()))
try:
if opts.webhook_forensic_url:
if opts.webhook_failure_url:
indent_value = 2 if opts.prettify_json else None
webhook_client.save_forensic_report_to_webhook(
webhook_client.save_failure_report_to_webhook(
json.dumps(report, ensure_ascii=False, indent=indent_value)
)
except Exception as error_:
@@ -380,9 +380,9 @@ def _main():
if opts.hec:
try:
forensic_reports_ = reports_["forensic_reports"]
if len(forensic_reports_) > 0:
hec_client.save_forensic_reports_to_splunk(forensic_reports_)
failure_reports_ = reports_["failure_reports"]
if len(failure_reports_) > 0:
hec_client.save_failure_reports_to_splunk(failure_reports_)
except splunk.SplunkError as e:
logger.error("Splunk HEC error: {0}".format(e.__str__()))
@@ -480,13 +480,13 @@ def _main():
dce=opts.la_dce,
dcr_immutable_id=opts.la_dcr_immutable_id,
dcr_aggregate_stream=opts.la_dcr_aggregate_stream,
dcr_forensic_stream=opts.la_dcr_forensic_stream,
dcr_failure_stream=opts.la_dcr_failure_stream,
dcr_smtp_tls_stream=opts.la_dcr_smtp_tls_stream,
)
la_client.publish_results(
reports_,
opts.save_aggregate,
opts.save_forensic,
opts.save_failure,
opts.save_smtp_tls,
)
except loganalytics.LogAnalyticsException as e:
@@ -508,10 +508,10 @@ def _main():
arg_parser.add_argument(
"file_path",
nargs="*",
help="one or more paths to aggregate or forensic "
help="one or more paths to aggregate or failure "
"report files, emails, or mbox files'",
)
strip_attachment_help = "remove attachment payloads from forensic report output"
strip_attachment_help = "remove attachment payloads from failure report output"
arg_parser.add_argument(
"--strip-attachment-payloads", help=strip_attachment_help, action="store_true"
)
@@ -524,9 +524,9 @@ def _main():
default="aggregate.json",
)
arg_parser.add_argument(
"--forensic-json-filename",
help="filename for the forensic JSON output file",
default="forensic.json",
"--failure-json-filename",
help="filename for the failure JSON output file",
default="failure.json",
)
arg_parser.add_argument(
"--smtp-tls-json-filename",
@@ -539,9 +539,9 @@ def _main():
default="aggregate.csv",
)
arg_parser.add_argument(
"--forensic-csv-filename",
help="filename for the forensic CSV output file",
default="forensic.csv",
"--failure-csv-filename",
help="filename for the failure CSV output file",
default="failure.csv",
)
arg_parser.add_argument(
"--smtp-tls-csv-filename",
@@ -588,7 +588,7 @@ def _main():
arg_parser.add_argument("-v", "--version", action="version", version=__version__)
aggregate_reports = []
forensic_reports = []
failure_reports = []
smtp_tls_reports = []
args = arg_parser.parse_args()
@@ -603,8 +603,8 @@ def _main():
output=args.output,
aggregate_csv_filename=args.aggregate_csv_filename,
aggregate_json_filename=args.aggregate_json_filename,
forensic_csv_filename=args.forensic_csv_filename,
forensic_json_filename=args.forensic_json_filename,
failure_csv_filename=args.failure_csv_filename,
failure_json_filename=args.failure_json_filename,
smtp_tls_json_filename=args.smtp_tls_json_filename,
smtp_tls_csv_filename=args.smtp_tls_csv_filename,
nameservers=args.nameservers,
@@ -616,7 +616,7 @@ def _main():
verbose=args.verbose,
prettify_json=args.prettify_json,
save_aggregate=False,
save_forensic=False,
save_failure=False,
save_smtp_tls=False,
mailbox_reports_folder="INBOX",
mailbox_archive_folder="Archive",
@@ -675,7 +675,7 @@ def _main():
kafka_username=None,
kafka_password=None,
kafka_aggregate_topic=None,
kafka_forensic_topic=None,
kafka_failure_topic=None,
kafka_smtp_tls_topic=None,
kafka_ssl=False,
kafka_skip_certificate_verification=False,
@@ -697,6 +697,13 @@ def _main():
s3_secret_access_key=None,
syslog_server=None,
syslog_port=None,
syslog_protocol=None,
syslog_cafile_path=None,
syslog_certfile_path=None,
syslog_keyfile_path=None,
syslog_timeout=None,
syslog_retry_attempts=None,
syslog_retry_delay=None,
gmail_api_credentials_file=None,
gmail_api_token_file=None,
gmail_api_include_spam_trash=False,
@@ -717,13 +724,13 @@ def _main():
la_dce=None,
la_dcr_immutable_id=None,
la_dcr_aggregate_stream=None,
la_dcr_forensic_stream=None,
la_dcr_failure_stream=None,
la_dcr_smtp_tls_stream=None,
gelf_host=None,
gelf_port=None,
gelf_mode=None,
webhook_aggregate_url=None,
webhook_forensic_url=None,
webhook_failure_url=None,
webhook_smtp_tls_url=None,
webhook_timeout=60,
normalize_timespan_threshold_hours=24.0,
@@ -760,14 +767,18 @@ def _main():
opts.output = general_config["output"]
if "aggregate_json_filename" in general_config:
opts.aggregate_json_filename = general_config["aggregate_json_filename"]
if "forensic_json_filename" in general_config:
opts.forensic_json_filename = general_config["forensic_json_filename"]
if "failure_json_filename" in general_config:
opts.failure_json_filename = general_config["failure_json_filename"]
elif "forensic_json_filename" in general_config:
opts.failure_json_filename = general_config["forensic_json_filename"]
if "smtp_tls_json_filename" in general_config:
opts.smtp_tls_json_filename = general_config["smtp_tls_json_filename"]
if "aggregate_csv_filename" in general_config:
opts.aggregate_csv_filename = general_config["aggregate_csv_filename"]
if "forensic_csv_filename" in general_config:
opts.forensic_csv_filename = general_config["forensic_csv_filename"]
if "failure_csv_filename" in general_config:
opts.failure_csv_filename = general_config["failure_csv_filename"]
elif "forensic_csv_filename" in general_config:
opts.failure_csv_filename = general_config["forensic_csv_filename"]
if "smtp_tls_csv_filename" in general_config:
opts.smtp_tls_csv_filename = general_config["smtp_tls_csv_filename"]
if "dns_timeout" in general_config:
@@ -797,8 +808,10 @@ def _main():
exit(-1)
if "save_aggregate" in general_config:
opts.save_aggregate = bool(general_config.getboolean("save_aggregate"))
if "save_forensic" in general_config:
opts.save_forensic = bool(general_config.getboolean("save_forensic"))
if "save_failure" in general_config:
opts.save_failure = bool(general_config.getboolean("save_failure"))
elif "save_forensic" in general_config:
opts.save_failure = bool(general_config.getboolean("save_forensic"))
if "save_smtp_tls" in general_config:
opts.save_smtp_tls = bool(general_config.getboolean("save_smtp_tls"))
if "debug" in general_config:
@@ -1149,17 +1162,19 @@ def _main():
"aggregate_topic setting missing from the kafka config section"
)
exit(-1)
if "forensic_topic" in kafka_config:
opts.kafka_forensic_topic = kafka_config["forensic_topic"]
if "failure_topic" in kafka_config:
opts.kafka_failure_topic = kafka_config["failure_topic"]
elif "forensic_topic" in kafka_config:
opts.kafka_failure_topic = kafka_config["forensic_topic"]
else:
logger.critical(
"forensic_topic setting missing from the kafka config section"
"failure_topic setting missing from the kafka config section"
)
if "smtp_tls_topic" in kafka_config:
opts.kafka_smtp_tls_topic = kafka_config["smtp_tls_topic"]
else:
logger.critical(
"forensic_topic setting missing from the splunk_hec config section"
"smtp_tls_topic setting missing from the kafka config section"
)
if "smtp" in config.sections():
@@ -1239,6 +1254,28 @@ def _main():
opts.syslog_port = syslog_config["port"]
else:
opts.syslog_port = 514
if "protocol" in syslog_config:
opts.syslog_protocol = syslog_config["protocol"]
else:
opts.syslog_protocol = "udp"
if "cafile_path" in syslog_config:
opts.syslog_cafile_path = syslog_config["cafile_path"]
if "certfile_path" in syslog_config:
opts.syslog_certfile_path = syslog_config["certfile_path"]
if "keyfile_path" in syslog_config:
opts.syslog_keyfile_path = syslog_config["keyfile_path"]
if "timeout" in syslog_config:
opts.syslog_timeout = float(syslog_config["timeout"])
else:
opts.syslog_timeout = 5.0
if "retry_attempts" in syslog_config:
opts.syslog_retry_attempts = int(syslog_config["retry_attempts"])
else:
opts.syslog_retry_attempts = 3
if "retry_delay" in syslog_config:
opts.syslog_retry_delay = int(syslog_config["retry_delay"])
else:
opts.syslog_retry_delay = 5
if "gmail_api" in config.sections():
gmail_api_config = config["gmail_api"]
@@ -1276,9 +1313,13 @@ def _main():
opts.la_dcr_aggregate_stream = log_analytics_config.get(
"dcr_aggregate_stream"
)
opts.la_dcr_forensic_stream = log_analytics_config.get(
"dcr_forensic_stream"
opts.la_dcr_failure_stream = log_analytics_config.get(
"dcr_failure_stream"
)
if opts.la_dcr_failure_stream is None:
opts.la_dcr_failure_stream = log_analytics_config.get(
"dcr_forensic_stream"
)
opts.la_dcr_smtp_tls_stream = log_analytics_config.get(
"dcr_smtp_tls_stream"
)
@@ -1305,8 +1346,10 @@ def _main():
webhook_config = config["webhook"]
if "aggregate_url" in webhook_config:
opts.webhook_aggregate_url = webhook_config["aggregate_url"]
if "forensic_url" in webhook_config:
opts.webhook_forensic_url = webhook_config["forensic_url"]
if "failure_url" in webhook_config:
opts.webhook_failure_url = webhook_config["failure_url"]
elif "forensic_url" in webhook_config:
opts.webhook_failure_url = webhook_config["forensic_url"]
if "smtp_tls_url" in webhook_config:
opts.webhook_smtp_tls_url = webhook_config["smtp_tls_url"]
if "timeout" in webhook_config:
@@ -1343,21 +1386,21 @@ def _main():
logger.info("Starting parsedmarc")
if opts.save_aggregate or opts.save_forensic or opts.save_smtp_tls:
if opts.save_aggregate or opts.save_failure or opts.save_smtp_tls:
try:
if opts.elasticsearch_hosts:
es_aggregate_index = "dmarc_aggregate"
es_forensic_index = "dmarc_forensic"
es_failure_index = "dmarc_failure"
es_smtp_tls_index = "smtp_tls"
if opts.elasticsearch_index_suffix:
suffix = opts.elasticsearch_index_suffix
es_aggregate_index = "{0}_{1}".format(es_aggregate_index, suffix)
es_forensic_index = "{0}_{1}".format(es_forensic_index, suffix)
es_failure_index = "{0}_{1}".format(es_failure_index, suffix)
es_smtp_tls_index = "{0}_{1}".format(es_smtp_tls_index, suffix)
if opts.elasticsearch_index_prefix:
prefix = opts.elasticsearch_index_prefix
es_aggregate_index = "{0}{1}".format(prefix, es_aggregate_index)
es_forensic_index = "{0}{1}".format(prefix, es_forensic_index)
es_failure_index = "{0}{1}".format(prefix, es_failure_index)
es_smtp_tls_index = "{0}{1}".format(prefix, es_smtp_tls_index)
elastic_timeout_value = (
float(opts.elasticsearch_timeout)
@@ -1375,7 +1418,7 @@ def _main():
)
elastic.migrate_indexes(
aggregate_indexes=[es_aggregate_index],
forensic_indexes=[es_forensic_index],
failure_indexes=[es_failure_index],
)
except elastic.ElasticsearchError:
logger.exception("Elasticsearch Error")
@@ -1384,17 +1427,17 @@ def _main():
try:
if opts.opensearch_hosts:
os_aggregate_index = "dmarc_aggregate"
os_forensic_index = "dmarc_forensic"
os_failure_index = "dmarc_failure"
os_smtp_tls_index = "smtp_tls"
if opts.opensearch_index_suffix:
suffix = opts.opensearch_index_suffix
os_aggregate_index = "{0}_{1}".format(os_aggregate_index, suffix)
os_forensic_index = "{0}_{1}".format(os_forensic_index, suffix)
os_failure_index = "{0}_{1}".format(os_failure_index, suffix)
os_smtp_tls_index = "{0}_{1}".format(os_smtp_tls_index, suffix)
if opts.opensearch_index_prefix:
prefix = opts.opensearch_index_prefix
os_aggregate_index = "{0}{1}".format(prefix, os_aggregate_index)
os_forensic_index = "{0}{1}".format(prefix, os_forensic_index)
os_failure_index = "{0}{1}".format(prefix, os_failure_index)
os_smtp_tls_index = "{0}{1}".format(prefix, os_smtp_tls_index)
opensearch_timeout_value = (
float(opts.opensearch_timeout)
@@ -1412,7 +1455,7 @@ def _main():
)
opensearch.migrate_indexes(
aggregate_indexes=[os_aggregate_index],
forensic_indexes=[os_forensic_index],
failure_indexes=[os_failure_index],
)
except opensearch.OpenSearchError:
logger.exception("OpenSearch Error")
@@ -1436,6 +1479,17 @@ def _main():
syslog_client = syslog.SyslogClient(
server_name=opts.syslog_server,
server_port=int(opts.syslog_port),
protocol=opts.syslog_protocol or "udp",
cafile_path=opts.syslog_cafile_path,
certfile_path=opts.syslog_certfile_path,
keyfile_path=opts.syslog_keyfile_path,
timeout=opts.syslog_timeout if opts.syslog_timeout is not None else 5.0,
retry_attempts=opts.syslog_retry_attempts
if opts.syslog_retry_attempts is not None
else 3,
retry_delay=opts.syslog_retry_delay
if opts.syslog_retry_delay is not None
else 5,
)
except Exception as error_:
logger.error("Syslog Error: {0}".format(error_.__str__()))
@@ -1481,13 +1535,13 @@ def _main():
if (
opts.webhook_aggregate_url
or opts.webhook_forensic_url
or opts.webhook_failure_url
or opts.webhook_smtp_tls_url
):
try:
webhook_client = webhook.WebhookClient(
aggregate_url=opts.webhook_aggregate_url,
forensic_url=opts.webhook_forensic_url,
failure_url=opts.webhook_failure_url,
smtp_tls_url=opts.webhook_smtp_tls_url,
timeout=opts.webhook_timeout,
)
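The webhook client keyword changes the same way; a brief sketch with placeholder endpoint URLs:

from parsedmarc import webhook

webhook_client = webhook.WebhookClient(
    aggregate_url="https://example.com/hooks/dmarc-aggregate",  # placeholder
    failure_url="https://example.com/hooks/dmarc-failure",      # formerly forensic_url
    smtp_tls_url=None,
    timeout=60,
)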
@@ -1495,7 +1549,7 @@ def _main():
logger.error("Webhook Error: {0}".format(error_.__str__()))
kafka_aggregate_topic = opts.kafka_aggregate_topic
kafka_forensic_topic = opts.kafka_forensic_topic
kafka_failure_topic = opts.kafka_failure_topic
kafka_smtp_tls_topic = opts.kafka_smtp_tls_topic
file_paths = []
@@ -1591,8 +1645,8 @@ def _main():
"Skipping duplicate aggregate report "
f"from {report_org} with ID: {report_id}"
)
elif result[0]["report_type"] == "forensic":
forensic_reports.append(result[0]["report"])
elif result[0]["report_type"] == "failure":
failure_reports.append(result[0]["report"])
elif result[0]["report_type"] == "smtp_tls":
smtp_tls_reports.append(result[0]["report"])
@@ -1616,7 +1670,7 @@ def _main():
normalize_timespan_threshold_hours=normalize_timespan_threshold_hours_value,
)
aggregate_reports += reports["aggregate_reports"]
forensic_reports += reports["forensic_reports"]
failure_reports += reports["failure_reports"]
smtp_tls_reports += reports["smtp_tls_reports"]
mailbox_connection = None
@@ -1751,7 +1805,7 @@ def _main():
)
aggregate_reports += reports["aggregate_reports"]
forensic_reports += reports["forensic_reports"]
failure_reports += reports["failure_reports"]
smtp_tls_reports += reports["smtp_tls_reports"]
except Exception:
@@ -1760,7 +1814,7 @@ def _main():
parsing_results: ParsingResults = {
"aggregate_reports": aggregate_reports,
"forensic_reports": forensic_reports,
"failure_reports": failure_reports,
"smtp_tls_reports": smtp_tls_reports,
}

View File

@@ -1,3 +1,3 @@
__version__ = "9.0.8"
__version__ = "9.1.0"
USER_AGENT = f"parsedmarc/{__version__}"

View File

@@ -13,6 +13,7 @@ from elasticsearch_dsl import (
InnerDoc,
Integer,
Ip,
Keyword,
Nested,
Object,
Search,
@@ -21,7 +22,7 @@ from elasticsearch_dsl import (
)
from elasticsearch_dsl.search import Q
from parsedmarc import InvalidForensicReport
from parsedmarc import InvalidFailureReport
from parsedmarc.log import logger
from parsedmarc.utils import human_timestamp_to_datetime
@@ -43,18 +44,23 @@ class _PublishedPolicy(InnerDoc):
sp = Text()
pct = Integer()
fo = Text()
np = Keyword()
testing = Keyword()
discovery_method = Keyword()
class _DKIMResult(InnerDoc):
domain = Text()
selector = Text()
result = Text()
human_result = Text()
class _SPFResult(InnerDoc):
domain = Text()
scope = Text()
results = Text()
human_result = Text()
class _AggregateReportDoc(Document):
@@ -90,17 +96,35 @@ class _AggregateReportDoc(Document):
envelope_to = Text()
dkim_results = Nested(_DKIMResult)
spf_results = Nested(_SPFResult)
np = Keyword()
testing = Keyword()
discovery_method = Keyword()
generator = Text()
def add_policy_override(self, type_: str, comment: str):
self.policy_overrides.append(_PolicyOverride(type=type_, comment=comment)) # pyright: ignore[reportCallIssue]
def add_dkim_result(self, domain: str, selector: str, result: _DKIMResult):
def add_dkim_result(
self, domain: str, selector: str, result: _DKIMResult,
human_result: str = None,
):
self.dkim_results.append(
_DKIMResult(domain=domain, selector=selector, result=result)
_DKIMResult(
domain=domain, selector=selector, result=result,
human_result=human_result,
)
) # pyright: ignore[reportCallIssue]
def add_spf_result(self, domain: str, scope: str, result: _SPFResult):
self.spf_results.append(_SPFResult(domain=domain, scope=scope, result=result)) # pyright: ignore[reportCallIssue]
def add_spf_result(
self, domain: str, scope: str, result: _SPFResult,
human_result: str = None,
):
self.spf_results.append(
_SPFResult(
domain=domain, scope=scope, result=result,
human_result=human_result,
)
) # pyright: ignore[reportCallIssue]
def save(self, **kwargs): # pyright: ignore[reportIncompatibleMethodOverride]
self.passed_dmarc = False
@@ -120,7 +144,7 @@ class _EmailAttachmentDoc(Document):
sha256 = Text()
class _ForensicSampleDoc(InnerDoc):
class _FailureSampleDoc(InnerDoc):
raw = Text()
headers = Object()
headers_only = Boolean()
@@ -157,9 +181,9 @@ class _ForensicSampleDoc(InnerDoc):
) # pyright: ignore[reportCallIssue]
class _ForensicReportDoc(Document):
class _FailureReportDoc(Document):
class Index:
name = "dmarc_forensic"
name = "dmarc_failure"
feedback_type = Text()
user_agent = Text()
@@ -177,7 +201,7 @@ class _ForensicReportDoc(Document):
source_auth_failures = Text()
dkim_domain = Text()
original_rcpt_to = Text()
sample = Object(_ForensicSampleDoc)
sample = Object(_FailureSampleDoc)
class _SMTPTLSFailureDetailsDoc(InnerDoc):
@@ -327,20 +351,20 @@ def create_indexes(names: list[str], settings: Optional[dict[str, Any]] = None):
def migrate_indexes(
aggregate_indexes: Optional[list[str]] = None,
forensic_indexes: Optional[list[str]] = None,
failure_indexes: Optional[list[str]] = None,
):
"""
Updates index mappings
Args:
aggregate_indexes (list): A list of aggregate index names
forensic_indexes (list): A list of forensic index names
failure_indexes (list): A list of failure index names
"""
version = 2
if aggregate_indexes is None:
aggregate_indexes = []
if forensic_indexes is None:
forensic_indexes = []
if failure_indexes is None:
failure_indexes = []
for aggregate_index_name in aggregate_indexes:
if not Index(aggregate_index_name).exists():
continue
@@ -370,7 +394,7 @@ def migrate_indexes(
reindex(connections.get_connection(), aggregate_index_name, new_index_name) # pyright: ignore[reportArgumentType]
Index(aggregate_index_name).delete()
for forensic_index in forensic_indexes:
for failure_index in failure_indexes:
pass
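Callers that invoke migrate_indexes directly need the renamed keyword; a minimal sketch, assuming an Elasticsearch connection has already been configured the way cli.py does before this call:

from parsedmarc import elastic

elastic.migrate_indexes(
    aggregate_indexes=["dmarc_aggregate"],
    failure_indexes=["dmarc_failure"],  # keyword renamed from forensic_indexes
)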
@@ -386,7 +410,7 @@ def save_aggregate_report_to_elasticsearch(
Saves a parsed DMARC aggregate report to Elasticsearch
Args:
aggregate_report (dict): A parsed forensic report
aggregate_report (dict): A parsed aggregate report
index_suffix (str): The suffix of the name of the index to save to
index_prefix (str): The prefix of the name of the index to save to
monthly_indexes (bool): Use monthly indexes instead of daily indexes
@@ -454,6 +478,11 @@ def save_aggregate_report_to_elasticsearch(
sp=aggregate_report["policy_published"]["sp"],
pct=aggregate_report["policy_published"]["pct"],
fo=aggregate_report["policy_published"]["fo"],
np=aggregate_report["policy_published"].get("np"),
testing=aggregate_report["policy_published"].get("testing"),
discovery_method=aggregate_report["policy_published"].get(
"discovery_method"
),
)
for record in aggregate_report["records"]:
@@ -495,6 +524,12 @@ def save_aggregate_report_to_elasticsearch(
header_from=record["identifiers"]["header_from"],
envelope_from=record["identifiers"]["envelope_from"],
envelope_to=record["identifiers"]["envelope_to"],
np=aggregate_report["policy_published"].get("np"),
testing=aggregate_report["policy_published"].get("testing"),
discovery_method=aggregate_report["policy_published"].get(
"discovery_method"
),
generator=metadata.get("generator"),
)
for override in record["policy_evaluated"]["policy_override_reasons"]:
@@ -507,6 +542,7 @@ def save_aggregate_report_to_elasticsearch(
domain=dkim_result["domain"],
selector=dkim_result["selector"],
result=dkim_result["result"],
human_result=dkim_result.get("human_result"),
)
for spf_result in record["auth_results"]["spf"]:
@@ -514,6 +550,7 @@ def save_aggregate_report_to_elasticsearch(
domain=spf_result["domain"],
scope=spf_result["scope"],
result=spf_result["result"],
human_result=spf_result.get("human_result"),
)
index = "dmarc_aggregate"
@@ -535,8 +572,8 @@ def save_aggregate_report_to_elasticsearch(
raise ElasticsearchError("Elasticsearch error: {0}".format(e.__str__()))
def save_forensic_report_to_elasticsearch(
forensic_report: dict[str, Any],
def save_failure_report_to_elasticsearch(
failure_report: dict[str, Any],
index_suffix: Optional[Any] = None,
index_prefix: Optional[str] = None,
monthly_indexes: Optional[bool] = False,
@@ -544,10 +581,10 @@ def save_forensic_report_to_elasticsearch(
number_of_replicas: int = 0,
):
"""
Saves a parsed DMARC forensic report to Elasticsearch
Saves a parsed DMARC failure report to Elasticsearch
Args:
forensic_report (dict): A parsed forensic report
failure_report (dict): A parsed failure report
index_suffix (str): The suffix of the name of the index to save to
index_prefix (str): The prefix of the name of the index to save to
monthly_indexes (bool): Use monthly indexes instead of daily
@@ -560,26 +597,31 @@ def save_forensic_report_to_elasticsearch(
AlreadySaved
"""
logger.info("Saving forensic report to Elasticsearch")
forensic_report = forensic_report.copy()
logger.info("Saving failure report to Elasticsearch")
failure_report = failure_report.copy()
sample_date = None
if forensic_report["parsed_sample"]["date"] is not None:
sample_date = forensic_report["parsed_sample"]["date"]
if failure_report["parsed_sample"]["date"] is not None:
sample_date = failure_report["parsed_sample"]["date"]
sample_date = human_timestamp_to_datetime(sample_date)
original_headers = forensic_report["parsed_sample"]["headers"]
original_headers = failure_report["parsed_sample"]["headers"]
headers: dict[str, Any] = {}
for original_header in original_headers:
headers[original_header.lower()] = original_headers[original_header]
arrival_date = human_timestamp_to_datetime(forensic_report["arrival_date_utc"])
arrival_date = human_timestamp_to_datetime(failure_report["arrival_date_utc"])
arrival_date_epoch_milliseconds = int(arrival_date.timestamp() * 1000)
if index_suffix is not None:
search_index = "dmarc_forensic_{0}*".format(index_suffix)
search_index = "dmarc_failure_{0}*,dmarc_forensic_{0}*".format(
index_suffix
)
else:
search_index = "dmarc_forensic*"
search_index = "dmarc_failure*,dmarc_forensic*"
if index_prefix is not None:
search_index = "{0}{1}".format(index_prefix, search_index)
search_index = ",".join(
"{0}{1}".format(index_prefix, part)
for part in search_index.split(",")
)
search = Search(index=search_index)
q = Q(dict(match=dict(arrival_date=arrival_date_epoch_milliseconds))) # pyright: ignore[reportArgumentType]
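The widened pattern searches both the new and the old index names so that duplicate detection still sees reports saved before the rename. A hypothetical helper that reproduces the same string building (parsedmarc does this inline rather than in a function):

def _failure_search_index(index_suffix=None, index_prefix=None):
    if index_suffix is not None:
        pattern = "dmarc_failure_{0}*,dmarc_forensic_{0}*".format(index_suffix)
    else:
        pattern = "dmarc_failure*,dmarc_forensic*"
    if index_prefix is not None:
        pattern = ",".join(index_prefix + part for part in pattern.split(","))
    return pattern

# _failure_search_index("example", "acme_")
# -> "acme_dmarc_failure_example*,acme_dmarc_forensic_example*"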
@@ -620,64 +662,64 @@ def save_forensic_report_to_elasticsearch(
if len(existing) > 0:
raise AlreadySaved(
"A forensic sample to {0} from {1} "
"A failure sample to {0} from {1} "
"with a subject of {2} and arrival date of {3} "
"already exists in "
"Elasticsearch".format(
to_, from_, subject, forensic_report["arrival_date_utc"]
to_, from_, subject, failure_report["arrival_date_utc"]
)
)
parsed_sample = forensic_report["parsed_sample"]
sample = _ForensicSampleDoc(
raw=forensic_report["sample"],
parsed_sample = failure_report["parsed_sample"]
sample = _FailureSampleDoc(
raw=failure_report["sample"],
headers=headers,
headers_only=forensic_report["sample_headers_only"],
headers_only=failure_report["sample_headers_only"],
date=sample_date,
subject=forensic_report["parsed_sample"]["subject"],
subject=failure_report["parsed_sample"]["subject"],
filename_safe_subject=parsed_sample["filename_safe_subject"],
body=forensic_report["parsed_sample"]["body"],
body=failure_report["parsed_sample"]["body"],
)
for address in forensic_report["parsed_sample"]["to"]:
for address in failure_report["parsed_sample"]["to"]:
sample.add_to(display_name=address["display_name"], address=address["address"])
for address in forensic_report["parsed_sample"]["reply_to"]:
for address in failure_report["parsed_sample"]["reply_to"]:
sample.add_reply_to(
display_name=address["display_name"], address=address["address"]
)
for address in forensic_report["parsed_sample"]["cc"]:
for address in failure_report["parsed_sample"]["cc"]:
sample.add_cc(display_name=address["display_name"], address=address["address"])
for address in forensic_report["parsed_sample"]["bcc"]:
for address in failure_report["parsed_sample"]["bcc"]:
sample.add_bcc(display_name=address["display_name"], address=address["address"])
for attachment in forensic_report["parsed_sample"]["attachments"]:
for attachment in failure_report["parsed_sample"]["attachments"]:
sample.add_attachment(
filename=attachment["filename"],
content_type=attachment["mail_content_type"],
sha256=attachment["sha256"],
)
try:
forensic_doc = _ForensicReportDoc(
feedback_type=forensic_report["feedback_type"],
user_agent=forensic_report["user_agent"],
version=forensic_report["version"],
original_mail_from=forensic_report["original_mail_from"],
failure_doc = _FailureReportDoc(
feedback_type=failure_report["feedback_type"],
user_agent=failure_report["user_agent"],
version=failure_report["version"],
original_mail_from=failure_report["original_mail_from"],
arrival_date=arrival_date_epoch_milliseconds,
domain=forensic_report["reported_domain"],
original_envelope_id=forensic_report["original_envelope_id"],
authentication_results=forensic_report["authentication_results"],
delivery_results=forensic_report["delivery_result"],
source_ip_address=forensic_report["source"]["ip_address"],
source_country=forensic_report["source"]["country"],
source_reverse_dns=forensic_report["source"]["reverse_dns"],
source_base_domain=forensic_report["source"]["base_domain"],
authentication_mechanisms=forensic_report["authentication_mechanisms"],
auth_failure=forensic_report["auth_failure"],
dkim_domain=forensic_report["dkim_domain"],
original_rcpt_to=forensic_report["original_rcpt_to"],
domain=failure_report["reported_domain"],
original_envelope_id=failure_report["original_envelope_id"],
authentication_results=failure_report["authentication_results"],
delivery_results=failure_report["delivery_result"],
source_ip_address=failure_report["source"]["ip_address"],
source_country=failure_report["source"]["country"],
source_reverse_dns=failure_report["source"]["reverse_dns"],
source_base_domain=failure_report["source"]["base_domain"],
authentication_mechanisms=failure_report["authentication_mechanisms"],
auth_failure=failure_report["auth_failure"],
dkim_domain=failure_report["dkim_domain"],
original_rcpt_to=failure_report["original_rcpt_to"],
sample=sample,
)
index = "dmarc_forensic"
index = "dmarc_failure"
if index_suffix:
index = "{0}_{1}".format(index, index_suffix)
if index_prefix:
@@ -691,14 +733,14 @@ def save_forensic_report_to_elasticsearch(
number_of_shards=number_of_shards, number_of_replicas=number_of_replicas
)
create_indexes([index], index_settings)
forensic_doc.meta.index = index # pyright: ignore[reportAttributeAccessIssue, reportOptionalMemberAccess]
failure_doc.meta.index = index # pyright: ignore[reportAttributeAccessIssue, reportOptionalMemberAccess]
try:
forensic_doc.save()
failure_doc.save()
except Exception as e:
raise ElasticsearchError("Elasticsearch error: {0}".format(e.__str__()))
except KeyError as e:
raise InvalidForensicReport(
"Forensic report missing required field: {0}".format(e.__str__())
raise InvalidFailureReport(
"Failure report missing required field: {0}".format(e.__str__())
)
@@ -851,3 +893,9 @@ def save_smtp_tls_report_to_elasticsearch(
smtp_tls_doc.save()
except Exception as e:
raise ElasticsearchError("Elasticsearch error: {0}".format(e.__str__()))
# Backward-compatible aliases
_ForensicSampleDoc = _FailureSampleDoc
_ForensicReportDoc = _FailureReportDoc
save_forensic_report_to_elasticsearch = save_failure_report_to_elasticsearch
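A minimal call sketch for the renamed saver; the report dict is assumed to be one entry from a parser's failure_reports list, and an Elasticsearch connection is assumed to be configured already:

from typing import Any

from parsedmarc import elastic

def index_failure_report(failure_report: dict[str, Any]) -> None:
    elastic.save_failure_report_to_elasticsearch(
        failure_report,
        index_suffix="example",  # optional; appended to the dmarc_failure index name
        monthly_indexes=False,
    )

# The legacy name remains importable and points at the same function:
assert elastic.save_forensic_report_to_elasticsearch is elastic.save_failure_report_to_elasticsearch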

View File

@@ -11,7 +11,7 @@ from pygelf import GelfTcpHandler, GelfTlsHandler, GelfUdpHandler
from parsedmarc import (
parsed_aggregate_reports_to_csv_rows,
parsed_forensic_reports_to_csv_rows,
parsed_failure_reports_to_csv_rows,
parsed_smtp_tls_reports_to_csv_rows,
)
@@ -58,14 +58,18 @@ class GelfClient(object):
log_context_data.parsedmarc = None
def save_forensic_report_to_gelf(self, forensic_reports: list[dict[str, Any]]):
rows = parsed_forensic_reports_to_csv_rows(forensic_reports)
def save_failure_report_to_gelf(self, failure_reports: list[dict[str, Any]]):
rows = parsed_failure_reports_to_csv_rows(failure_reports)
for row in rows:
log_context_data.parsedmarc = row
self.logger.info("parsedmarc forensic report")
self.logger.info("parsedmarc failure report")
def save_smtp_tls_report_to_gelf(self, smtp_tls_reports: dict[str, Any]):
rows = parsed_smtp_tls_reports_to_csv_rows(smtp_tls_reports)
for row in rows:
log_context_data.parsedmarc = row
self.logger.info("parsedmarc smtptls report")
# Backward-compatible aliases
GelfClient.save_forensic_report_to_gelf = GelfClient.save_failure_report_to_gelf

View File

@@ -139,31 +139,31 @@ class KafkaClient(object):
except Exception as e:
raise KafkaError("Kafka error: {0}".format(e.__str__()))
def save_forensic_reports_to_kafka(
def save_failure_reports_to_kafka(
self,
forensic_reports: Union[dict[str, Any], list[dict[str, Any]]],
forensic_topic: str,
failure_reports: Union[dict[str, Any], list[dict[str, Any]]],
failure_topic: str,
):
"""
Saves forensic DMARC reports to Kafka, sends individual
Saves failure DMARC reports to Kafka, sends individual
records (slices) since Kafka requires messages to be <= 1MB
by default.
Args:
forensic_reports (list): A list of forensic report dicts
failure_reports (list): A list of failure report dicts
to save to Kafka
forensic_topic (str): The name of the Kafka topic
failure_topic (str): The name of the Kafka topic
"""
if isinstance(forensic_reports, dict):
forensic_reports = [forensic_reports]
if isinstance(failure_reports, dict):
failure_reports = [failure_reports]
if len(forensic_reports) < 1:
if len(failure_reports) < 1:
return
try:
logger.debug("Saving forensic reports to Kafka")
self.producer.send(forensic_topic, forensic_reports)
logger.debug("Saving failure reports to Kafka")
self.producer.send(failure_topic, failure_reports)
except UnknownTopicOrPartitionError:
raise KafkaError("Kafka error: Unknown topic or partition on broker")
except Exception as e:
@@ -184,7 +184,7 @@ class KafkaClient(object):
by default.
Args:
smtp_tls_reports (list): A list of forensic report dicts
smtp_tls_reports (list): A list of SMTP TLS report dicts
to save to Kafka
smtp_tls_topic (str): The name of the Kafka topic
@@ -196,7 +196,7 @@ class KafkaClient(object):
return
try:
logger.debug("Saving forensic reports to Kafka")
logger.debug("Saving SMTP TLS reports to Kafka")
self.producer.send(smtp_tls_topic, smtp_tls_reports)
except UnknownTopicOrPartitionError:
raise KafkaError("Kafka error: Unknown topic or partition on broker")
@@ -206,3 +206,7 @@ class KafkaClient(object):
self.producer.flush()
except Exception as e:
raise KafkaError("Kafka error: {0}".format(e.__str__()))
# Backward-compatible aliases
KafkaClient.save_forensic_reports_to_kafka = KafkaClient.save_failure_reports_to_kafka
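A brief sketch of the renamed Kafka call; the topic name is a placeholder and the client is assumed to be constructed elsewhere:

from typing import Any

from parsedmarc.kafkaclient import KafkaClient

def publish_failure_reports(client: KafkaClient, reports: list[dict[str, Any]]) -> None:
    client.save_failure_reports_to_kafka(reports, "dmarc_failure")  # placeholder topic

# Existing callers of the old method name keep working:
assert KafkaClient.save_forensic_reports_to_kafka is KafkaClient.save_failure_reports_to_kafka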

View File

@@ -38,9 +38,9 @@ class LogAnalyticsConfig:
The Stream name where
the Aggregate DMARC reports
need to be pushed.
dcr_forensic_stream (str):
dcr_failure_stream (str):
The Stream name where
the Forensic DMARC reports
the Failure DMARC reports
need to be pushed.
dcr_smtp_tls_stream (str):
The Stream name where
@@ -56,7 +56,7 @@ class LogAnalyticsConfig:
dce: str,
dcr_immutable_id: str,
dcr_aggregate_stream: str,
dcr_forensic_stream: str,
dcr_failure_stream: str,
dcr_smtp_tls_stream: str,
):
self.client_id = client_id
@@ -65,7 +65,7 @@ class LogAnalyticsConfig:
self.dce = dce
self.dcr_immutable_id = dcr_immutable_id
self.dcr_aggregate_stream = dcr_aggregate_stream
self.dcr_forensic_stream = dcr_forensic_stream
self.dcr_failure_stream = dcr_failure_stream
self.dcr_smtp_tls_stream = dcr_smtp_tls_stream
@@ -84,7 +84,7 @@ class LogAnalyticsClient(object):
dce: str,
dcr_immutable_id: str,
dcr_aggregate_stream: str,
dcr_forensic_stream: str,
dcr_failure_stream: str,
dcr_smtp_tls_stream: str,
):
self.conf = LogAnalyticsConfig(
@@ -94,7 +94,7 @@ class LogAnalyticsClient(object):
dce=dce,
dcr_immutable_id=dcr_immutable_id,
dcr_aggregate_stream=dcr_aggregate_stream,
dcr_forensic_stream=dcr_forensic_stream,
dcr_failure_stream=dcr_failure_stream,
dcr_smtp_tls_stream=dcr_smtp_tls_stream,
)
if (
@@ -135,7 +135,7 @@ class LogAnalyticsClient(object):
self,
results: dict[str, Any],
save_aggregate: bool,
save_forensic: bool,
save_failure: bool,
save_smtp_tls: bool,
):
"""
@@ -146,13 +146,13 @@ class LogAnalyticsClient(object):
Args:
results (list):
The DMARC reports (Aggregate & Forensic)
The DMARC reports (Aggregate & Failure)
save_aggregate (bool):
Whether Aggregate reports can be saved into Log Analytics
save_forensic (bool):
Whether Forensic reports can be saved into Log Analytics
save_failure (bool):
Whether Failure reports can be saved into Log Analytics
save_smtp_tls (bool):
Whether Forensic reports can be saved into Log Analytics
Whether SMTP TLS reports can be saved into Log Analytics
"""
conf = self.conf
credential = ClientSecretCredential(
@@ -173,16 +173,16 @@ class LogAnalyticsClient(object):
)
logger.info("Successfully pushed aggregate reports.")
if (
results["forensic_reports"]
and conf.dcr_forensic_stream
and len(results["forensic_reports"]) > 0
and save_forensic
results["failure_reports"]
and conf.dcr_failure_stream
and len(results["failure_reports"]) > 0
and save_failure
):
logger.info("Publishing forensic reports.")
logger.info("Publishing failure reports.")
self.publish_json(
results["forensic_reports"], logs_client, conf.dcr_forensic_stream
results["failure_reports"], logs_client, conf.dcr_failure_stream
)
logger.info("Successfully pushed forensic reports.")
logger.info("Successfully pushed failure reports.")
if (
results["smtp_tls_reports"]
and conf.dcr_smtp_tls_stream
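A minimal sketch of the renamed publish flow; the client is assumed to be configured with the dcr_failure_stream (or legacy dcr_forensic_stream) shown above:

from typing import Any

from parsedmarc import loganalytics

def push_results(la_client: loganalytics.LogAnalyticsClient, results: dict[str, Any]) -> None:
    # Positional flags follow the renamed signature:
    # (results, save_aggregate, save_failure, save_smtp_tls)
    la_client.publish_results(results, True, True, True)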

View File

@@ -20,59 +20,6 @@ from msgraph.core import GraphClient
from parsedmarc.log import logger
from parsedmarc.mail.mailbox_connection import MailboxConnection
# Mapping of common folder names to Microsoft Graph well-known folder names
# This avoids the "Default folder Root not found" error on uninitialized mailboxes
WELL_KNOWN_FOLDER_MAP = {
# English names
"inbox": "inbox",
"sent items": "sentitems",
"sent": "sentitems",
"sentitems": "sentitems",
"deleted items": "deleteditems",
"deleted": "deleteditems",
"deleteditems": "deleteditems",
"trash": "deleteditems",
"drafts": "drafts",
"junk email": "junkemail",
"junk": "junkemail",
"junkemail": "junkemail",
"spam": "junkemail",
"archive": "archive",
"outbox": "outbox",
"conversation history": "conversationhistory",
"conversationhistory": "conversationhistory",
# German names
"posteingang": "inbox",
"gesendete elemente": "sentitems",
"gesendet": "sentitems",
"gelöschte elemente": "deleteditems",
"gelöscht": "deleteditems",
"entwürfe": "drafts",
"junk-e-mail": "junkemail",
"archiv": "archive",
"postausgang": "outbox",
# French names
"boîte de réception": "inbox",
"éléments envoyés": "sentitems",
"envoyés": "sentitems",
"éléments supprimés": "deleteditems",
"supprimés": "deleteditems",
"brouillons": "drafts",
"courrier indésirable": "junkemail",
"archives": "archive",
"boîte d'envoi": "outbox",
# Spanish names
"bandeja de entrada": "inbox",
"elementos enviados": "sentitems",
"enviados": "sentitems",
"elementos eliminados": "deleteditems",
"eliminados": "deleteditems",
"borradores": "drafts",
"correo no deseado": "junkemail",
"archivar": "archive",
"bandeja de salida": "outbox",
}
class AuthMethod(Enum):
DeviceCode = 1
@@ -183,13 +130,6 @@ class MSGraphConnection(MailboxConnection):
self.mailbox_name = mailbox
def create_folder(self, folder_name: str):
# Check if this is a well-known folder - they already exist and cannot be created
if "/" not in folder_name:
well_known_name = WELL_KNOWN_FOLDER_MAP.get(folder_name.lower())
if well_known_name:
logger.debug(f"Folder '{folder_name}' is a well-known folder, skipping creation")
return
sub_url = ""
path_parts = folder_name.split("/")
if len(path_parts) > 1: # Folder is a subFolder
@@ -306,12 +246,6 @@ class MSGraphConnection(MailboxConnection):
parent_folder_id = folder_id
return self._find_folder_id_with_parent(path_parts[-1], parent_folder_id)
else:
# Check if this is a well-known folder name (case-insensitive)
well_known_name = WELL_KNOWN_FOLDER_MAP.get(folder_name.lower())
if well_known_name:
# Use well-known folder name directly to avoid querying uninitialized mailboxes
logger.debug(f"Using well-known folder name '{well_known_name}' for '{folder_name}'")
return well_known_name
return self._find_folder_id_with_parent(folder_name, None)
def _find_folder_id_with_parent(

View File

@@ -12,6 +12,7 @@ from opensearchpy import (
InnerDoc,
Integer,
Ip,
Keyword,
Nested,
Object,
Q,
@@ -21,7 +22,7 @@ from opensearchpy import (
)
from opensearchpy.helpers import reindex
from parsedmarc import InvalidForensicReport
from parsedmarc import InvalidFailureReport
from parsedmarc.log import logger
from parsedmarc.utils import human_timestamp_to_datetime
@@ -43,18 +44,23 @@ class _PublishedPolicy(InnerDoc):
sp = Text()
pct = Integer()
fo = Text()
np = Keyword()
testing = Keyword()
discovery_method = Keyword()
class _DKIMResult(InnerDoc):
domain = Text()
selector = Text()
result = Text()
human_result = Text()
class _SPFResult(InnerDoc):
domain = Text()
scope = Text()
results = Text()
human_result = Text()
class _AggregateReportDoc(Document):
@@ -90,17 +96,35 @@ class _AggregateReportDoc(Document):
envelope_to = Text()
dkim_results = Nested(_DKIMResult)
spf_results = Nested(_SPFResult)
np = Keyword()
testing = Keyword()
discovery_method = Keyword()
generator = Text()
def add_policy_override(self, type_: str, comment: str):
self.policy_overrides.append(_PolicyOverride(type=type_, comment=comment))
def add_dkim_result(self, domain: str, selector: str, result: _DKIMResult):
def add_dkim_result(
self, domain: str, selector: str, result: _DKIMResult,
human_result: str = None,
):
self.dkim_results.append(
_DKIMResult(domain=domain, selector=selector, result=result)
_DKIMResult(
domain=domain, selector=selector, result=result,
human_result=human_result,
)
)
def add_spf_result(self, domain: str, scope: str, result: _SPFResult):
self.spf_results.append(_SPFResult(domain=domain, scope=scope, result=result))
def add_spf_result(
self, domain: str, scope: str, result: _SPFResult,
human_result: str = None,
):
self.spf_results.append(
_SPFResult(
domain=domain, scope=scope, result=result,
human_result=human_result,
)
)
def save(self, **kwargs): # pyright: ignore[reportIncompatibleMethodOverride]
self.passed_dmarc = False
@@ -120,7 +144,7 @@ class _EmailAttachmentDoc(Document):
sha256 = Text()
class _ForensicSampleDoc(InnerDoc):
class _FailureSampleDoc(InnerDoc):
raw = Text()
headers = Object()
headers_only = Boolean()
@@ -157,9 +181,9 @@ class _ForensicSampleDoc(InnerDoc):
)
class _ForensicReportDoc(Document):
class _FailureReportDoc(Document):
class Index:
name = "dmarc_forensic"
name = "dmarc_failure"
feedback_type = Text()
user_agent = Text()
@@ -177,7 +201,7 @@ class _ForensicReportDoc(Document):
source_auth_failures = Text()
dkim_domain = Text()
original_rcpt_to = Text()
sample = Object(_ForensicSampleDoc)
sample = Object(_FailureSampleDoc)
class _SMTPTLSFailureDetailsDoc(InnerDoc):
@@ -327,20 +351,20 @@ def create_indexes(names: list[str], settings: Optional[dict[str, Any]] = None):
def migrate_indexes(
aggregate_indexes: Optional[list[str]] = None,
forensic_indexes: Optional[list[str]] = None,
failure_indexes: Optional[list[str]] = None,
):
"""
Updates index mappings
Args:
aggregate_indexes (list): A list of aggregate index names
forensic_indexes (list): A list of forensic index names
failure_indexes (list): A list of failure index names
"""
version = 2
if aggregate_indexes is None:
aggregate_indexes = []
if forensic_indexes is None:
forensic_indexes = []
if failure_indexes is None:
failure_indexes = []
for aggregate_index_name in aggregate_indexes:
if not Index(aggregate_index_name).exists():
continue
@@ -370,7 +394,7 @@ def migrate_indexes(
reindex(connections.get_connection(), aggregate_index_name, new_index_name)
Index(aggregate_index_name).delete()
for forensic_index in forensic_indexes:
for failure_index in failure_indexes:
pass
@@ -386,7 +410,7 @@ def save_aggregate_report_to_opensearch(
Saves a parsed DMARC aggregate report to OpenSearch
Args:
aggregate_report (dict): A parsed forensic report
aggregate_report (dict): A parsed aggregate report
index_suffix (str): The suffix of the name of the index to save to
index_prefix (str): The prefix of the name of the index to save to
monthly_indexes (bool): Use monthly indexes instead of daily indexes
@@ -454,6 +478,11 @@ def save_aggregate_report_to_opensearch(
sp=aggregate_report["policy_published"]["sp"],
pct=aggregate_report["policy_published"]["pct"],
fo=aggregate_report["policy_published"]["fo"],
np=aggregate_report["policy_published"].get("np"),
testing=aggregate_report["policy_published"].get("testing"),
discovery_method=aggregate_report["policy_published"].get(
"discovery_method"
),
)
for record in aggregate_report["records"]:
@@ -495,6 +524,12 @@ def save_aggregate_report_to_opensearch(
header_from=record["identifiers"]["header_from"],
envelope_from=record["identifiers"]["envelope_from"],
envelope_to=record["identifiers"]["envelope_to"],
np=aggregate_report["policy_published"].get("np"),
testing=aggregate_report["policy_published"].get("testing"),
discovery_method=aggregate_report["policy_published"].get(
"discovery_method"
),
generator=metadata.get("generator"),
)
for override in record["policy_evaluated"]["policy_override_reasons"]:
@@ -507,6 +542,7 @@ def save_aggregate_report_to_opensearch(
domain=dkim_result["domain"],
selector=dkim_result["selector"],
result=dkim_result["result"],
human_result=dkim_result.get("human_result"),
)
for spf_result in record["auth_results"]["spf"]:
@@ -514,6 +550,7 @@ def save_aggregate_report_to_opensearch(
domain=spf_result["domain"],
scope=spf_result["scope"],
result=spf_result["result"],
human_result=spf_result.get("human_result"),
)
index = "dmarc_aggregate"
@@ -535,8 +572,8 @@ def save_aggregate_report_to_opensearch(
raise OpenSearchError("OpenSearch error: {0}".format(e.__str__()))
def save_forensic_report_to_opensearch(
forensic_report: dict[str, Any],
def save_failure_report_to_opensearch(
failure_report: dict[str, Any],
index_suffix: Optional[str] = None,
index_prefix: Optional[str] = None,
monthly_indexes: bool = False,
@@ -544,10 +581,10 @@ def save_forensic_report_to_opensearch(
number_of_replicas: int = 0,
):
"""
Saves a parsed DMARC forensic report to OpenSearch
Saves a parsed DMARC failure report to OpenSearch
Args:
forensic_report (dict): A parsed forensic report
failure_report (dict): A parsed failure report
index_suffix (str): The suffix of the name of the index to save to
index_prefix (str): The prefix of the name of the index to save to
monthly_indexes (bool): Use monthly indexes instead of daily
@@ -560,26 +597,31 @@ def save_forensic_report_to_opensearch(
AlreadySaved
"""
logger.info("Saving forensic report to OpenSearch")
forensic_report = forensic_report.copy()
logger.info("Saving failure report to OpenSearch")
failure_report = failure_report.copy()
sample_date = None
if forensic_report["parsed_sample"]["date"] is not None:
sample_date = forensic_report["parsed_sample"]["date"]
if failure_report["parsed_sample"]["date"] is not None:
sample_date = failure_report["parsed_sample"]["date"]
sample_date = human_timestamp_to_datetime(sample_date)
original_headers = forensic_report["parsed_sample"]["headers"]
original_headers = failure_report["parsed_sample"]["headers"]
headers: dict[str, Any] = {}
for original_header in original_headers:
headers[original_header.lower()] = original_headers[original_header]
arrival_date = human_timestamp_to_datetime(forensic_report["arrival_date_utc"])
arrival_date = human_timestamp_to_datetime(failure_report["arrival_date_utc"])
arrival_date_epoch_milliseconds = int(arrival_date.timestamp() * 1000)
if index_suffix is not None:
search_index = "dmarc_forensic_{0}*".format(index_suffix)
search_index = "dmarc_failure_{0}*,dmarc_forensic_{0}*".format(
index_suffix
)
else:
search_index = "dmarc_forensic*"
search_index = "dmarc_failure*,dmarc_forensic*"
if index_prefix is not None:
search_index = "{0}{1}".format(index_prefix, search_index)
search_index = ",".join(
"{0}{1}".format(index_prefix, part)
for part in search_index.split(",")
)
search = Search(index=search_index)
q = Q(dict(match=dict(arrival_date=arrival_date_epoch_milliseconds)))
@@ -620,64 +662,64 @@ def save_forensic_report_to_opensearch(
if len(existing) > 0:
raise AlreadySaved(
"A forensic sample to {0} from {1} "
"A failure sample to {0} from {1} "
"with a subject of {2} and arrival date of {3} "
"already exists in "
"OpenSearch".format(
to_, from_, subject, forensic_report["arrival_date_utc"]
to_, from_, subject, failure_report["arrival_date_utc"]
)
)
parsed_sample = forensic_report["parsed_sample"]
sample = _ForensicSampleDoc(
raw=forensic_report["sample"],
parsed_sample = failure_report["parsed_sample"]
sample = _FailureSampleDoc(
raw=failure_report["sample"],
headers=headers,
headers_only=forensic_report["sample_headers_only"],
headers_only=failure_report["sample_headers_only"],
date=sample_date,
subject=forensic_report["parsed_sample"]["subject"],
subject=failure_report["parsed_sample"]["subject"],
filename_safe_subject=parsed_sample["filename_safe_subject"],
body=forensic_report["parsed_sample"]["body"],
body=failure_report["parsed_sample"]["body"],
)
for address in forensic_report["parsed_sample"]["to"]:
for address in failure_report["parsed_sample"]["to"]:
sample.add_to(display_name=address["display_name"], address=address["address"])
for address in forensic_report["parsed_sample"]["reply_to"]:
for address in failure_report["parsed_sample"]["reply_to"]:
sample.add_reply_to(
display_name=address["display_name"], address=address["address"]
)
for address in forensic_report["parsed_sample"]["cc"]:
for address in failure_report["parsed_sample"]["cc"]:
sample.add_cc(display_name=address["display_name"], address=address["address"])
for address in forensic_report["parsed_sample"]["bcc"]:
for address in failure_report["parsed_sample"]["bcc"]:
sample.add_bcc(display_name=address["display_name"], address=address["address"])
for attachment in forensic_report["parsed_sample"]["attachments"]:
for attachment in failure_report["parsed_sample"]["attachments"]:
sample.add_attachment(
filename=attachment["filename"],
content_type=attachment["mail_content_type"],
sha256=attachment["sha256"],
)
try:
forensic_doc = _ForensicReportDoc(
feedback_type=forensic_report["feedback_type"],
user_agent=forensic_report["user_agent"],
version=forensic_report["version"],
original_mail_from=forensic_report["original_mail_from"],
failure_doc = _FailureReportDoc(
feedback_type=failure_report["feedback_type"],
user_agent=failure_report["user_agent"],
version=failure_report["version"],
original_mail_from=failure_report["original_mail_from"],
arrival_date=arrival_date_epoch_milliseconds,
domain=forensic_report["reported_domain"],
original_envelope_id=forensic_report["original_envelope_id"],
authentication_results=forensic_report["authentication_results"],
delivery_results=forensic_report["delivery_result"],
source_ip_address=forensic_report["source"]["ip_address"],
source_country=forensic_report["source"]["country"],
source_reverse_dns=forensic_report["source"]["reverse_dns"],
source_base_domain=forensic_report["source"]["base_domain"],
authentication_mechanisms=forensic_report["authentication_mechanisms"],
auth_failure=forensic_report["auth_failure"],
dkim_domain=forensic_report["dkim_domain"],
original_rcpt_to=forensic_report["original_rcpt_to"],
domain=failure_report["reported_domain"],
original_envelope_id=failure_report["original_envelope_id"],
authentication_results=failure_report["authentication_results"],
delivery_results=failure_report["delivery_result"],
source_ip_address=failure_report["source"]["ip_address"],
source_country=failure_report["source"]["country"],
source_reverse_dns=failure_report["source"]["reverse_dns"],
source_base_domain=failure_report["source"]["base_domain"],
authentication_mechanisms=failure_report["authentication_mechanisms"],
auth_failure=failure_report["auth_failure"],
dkim_domain=failure_report["dkim_domain"],
original_rcpt_to=failure_report["original_rcpt_to"],
sample=sample,
)
index = "dmarc_forensic"
index = "dmarc_failure"
if index_suffix:
index = "{0}_{1}".format(index, index_suffix)
if index_prefix:
@@ -691,14 +733,14 @@ def save_forensic_report_to_opensearch(
number_of_shards=number_of_shards, number_of_replicas=number_of_replicas
)
create_indexes([index], index_settings)
forensic_doc.meta.index = index
failure_doc.meta.index = index
try:
forensic_doc.save()
failure_doc.save()
except Exception as e:
raise OpenSearchError("OpenSearch error: {0}".format(e.__str__()))
except KeyError as e:
raise InvalidForensicReport(
"Forensic report missing required field: {0}".format(e.__str__())
raise InvalidFailureReport(
"Failure report missing required field: {0}".format(e.__str__())
)
@@ -851,3 +893,9 @@ def save_smtp_tls_report_to_opensearch(
smtp_tls_doc.save()
except Exception as e:
raise OpenSearchError("OpenSearch error: {0}".format(e.__str__()))
# Backward-compatible aliases
_ForensicSampleDoc = _FailureSampleDoc
_ForensicReportDoc = _FailureReportDoc
save_forensic_report_to_opensearch = save_failure_report_to_opensearch
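As a quick orientation on the backward compatibility above, a minimal sketch (assuming the module path parsedmarc.opensearch): the legacy function name resolves to the renamed one, and the duplicate check spans both index naming schemes. The index_suffix value below is a placeholder; the format string is taken from this hunk.

from parsedmarc import opensearch

# Old imports keep working: the legacy name is an alias for the new function.
assert (
    opensearch.save_forensic_report_to_opensearch
    is opensearch.save_failure_report_to_opensearch
)

# With monthly indexes, the duplicate check queries both naming schemes,
# e.g. for a February 2026 suffix:
index_suffix = "2026-02"
search_index = "dmarc_failure_{0}*,dmarc_forensic_{0}*".format(index_suffix)
# -> "dmarc_failure_2026-02*,dmarc_forensic_2026-02*"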

View File

@@ -56,8 +56,8 @@ class S3Client(object):
def save_aggregate_report_to_s3(self, report: dict[str, Any]):
self.save_report_to_s3(report, "aggregate")
def save_forensic_report_to_s3(self, report: dict[str, Any]):
self.save_report_to_s3(report, "forensic")
def save_failure_report_to_s3(self, report: dict[str, Any]):
self.save_report_to_s3(report, "failure")
def save_smtp_tls_report_to_s3(self, report: dict[str, Any]):
self.save_report_to_s3(report, "smtp_tls")
@@ -93,3 +93,7 @@ class S3Client(object):
self.bucket.put_object(
Body=json.dumps(report), Key=object_path, Metadata=object_metadata
)
# Backward-compatible aliases
S3Client.save_forensic_report_to_s3 = S3Client.save_failure_report_to_s3
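A brief usage sketch, assuming the class lives at parsedmarc.s3; the client argument stands in for an S3Client configured elsewhere, since the constructor is not part of this hunk, and the helper function is hypothetical.

from parsedmarc.s3 import S3Client

def archive_failure_report(client: S3Client, report: dict) -> None:
    # Reports are now stored under the "failure" key prefix instead of "forensic".
    client.save_failure_report_to_s3(report)

# Code still calling the old method name is unaffected.
assert S3Client.save_forensic_report_to_s3 is S3Client.save_failure_report_to_s3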

View File

@@ -134,28 +134,28 @@ class HECClient(object):
if response["code"] != 0:
raise SplunkError(response["text"])
def save_forensic_reports_to_splunk(
def save_failure_reports_to_splunk(
self,
forensic_reports: Union[list[dict[str, Any]], dict[str, Any]],
failure_reports: Union[list[dict[str, Any]], dict[str, Any]],
):
"""
Saves forensic DMARC reports to Splunk
Saves failure DMARC reports to Splunk
Args:
forensic_reports (list): A list of forensic report dictionaries
failure_reports (list): A list of failure report dictionaries
to save in Splunk
"""
logger.debug("Saving forensic reports to Splunk")
if isinstance(forensic_reports, dict):
forensic_reports = [forensic_reports]
logger.debug("Saving failure reports to Splunk")
if isinstance(failure_reports, dict):
failure_reports = [failure_reports]
if len(forensic_reports) < 1:
if len(failure_reports) < 1:
return
json_str = ""
for report in forensic_reports:
for report in failure_reports:
data = self._common_data.copy()
data["sourcetype"] = "dmarc:forensic"
data["sourcetype"] = "dmarc:failure"
timestamp = human_timestamp_to_unix_timestamp(report["arrival_date_utc"])
data["time"] = timestamp
data["event"] = report.copy()
@@ -207,3 +207,7 @@ class HECClient(object):
raise SplunkError(e.__str__())
if response["code"] != 0:
raise SplunkError(response["text"])
# Backward-compatible aliases
HECClient.save_forensic_reports_to_splunk = HECClient.save_failure_reports_to_splunk
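A hedged sketch of the renamed HEC method, assuming the module path parsedmarc.splunk; the hec argument stands in for a configured HECClient, whose constructor is outside this hunk, and the helper function is illustrative only.

from parsedmarc.splunk import HECClient

def forward_failure_reports(hec: HECClient, failure_reports: list[dict]) -> None:
    # A single dict is also accepted; the method wraps it in a list.
    # Events are indexed with sourcetype "dmarc:failure".
    hec.save_failure_reports_to_splunk(failure_reports)

# The old method name remains an alias for the new one.
assert (
    HECClient.save_forensic_reports_to_splunk
    is HECClient.save_failure_reports_to_splunk
)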

View File

@@ -6,11 +6,14 @@ from __future__ import annotations
import json
import logging
import logging.handlers
from typing import Any
import socket
import ssl
import time
from typing import Any, Optional
from parsedmarc import (
parsed_aggregate_reports_to_csv_rows,
parsed_forensic_reports_to_csv_rows,
parsed_failure_reports_to_csv_rows,
parsed_smtp_tls_reports_to_csv_rows,
)
@@ -18,27 +21,157 @@ from parsedmarc import (
class SyslogClient(object):
"""A client for Syslog"""
def __init__(self, server_name: str, server_port: int):
def __init__(
self,
server_name: str,
server_port: int,
protocol: str = "udp",
cafile_path: Optional[str] = None,
certfile_path: Optional[str] = None,
keyfile_path: Optional[str] = None,
timeout: float = 5.0,
retry_attempts: int = 3,
retry_delay: int = 5,
):
"""
Initializes the SyslogClient
Args:
server_name (str): The Syslog server
server_port (int): The Syslog UDP port
server_port (int): The Syslog port
protocol (str): The protocol to use: "udp", "tcp", or "tls" (Default: "udp")
cafile_path (str): Path to CA certificate file for TLS server verification (Optional)
certfile_path (str): Path to client certificate file for TLS authentication (Optional)
keyfile_path (str): Path to client private key file for TLS authentication (Optional)
timeout (float): Connection timeout in seconds for TCP/TLS (Default: 5.0)
retry_attempts (int): Number of retry attempts for failed connections (Default: 3)
retry_delay (int): Delay in seconds between retry attempts (Default: 5)
"""
self.server_name = server_name
self.server_port = server_port
self.protocol = protocol.lower()
self.timeout = timeout
self.retry_attempts = retry_attempts
self.retry_delay = retry_delay
self.logger = logging.getLogger("parsedmarc_syslog")
self.logger.setLevel(logging.INFO)
log_handler = logging.handlers.SysLogHandler(address=(server_name, server_port))
# Create the appropriate syslog handler based on protocol
log_handler = self._create_syslog_handler(
server_name,
server_port,
self.protocol,
cafile_path,
certfile_path,
keyfile_path,
timeout,
retry_attempts,
retry_delay,
)
self.logger.addHandler(log_handler)
def _create_syslog_handler(
self,
server_name: str,
server_port: int,
protocol: str,
cafile_path: Optional[str],
certfile_path: Optional[str],
keyfile_path: Optional[str],
timeout: float,
retry_attempts: int,
retry_delay: int,
) -> logging.handlers.SysLogHandler:
"""
Creates a SysLogHandler with the specified protocol and TLS settings
"""
if protocol == "udp":
# UDP protocol (default, backward compatible)
return logging.handlers.SysLogHandler(
address=(server_name, server_port),
socktype=socket.SOCK_DGRAM,
)
elif protocol in ["tcp", "tls"]:
# TCP or TLS protocol with retry logic
for attempt in range(1, retry_attempts + 1):
try:
if protocol == "tcp":
# TCP without TLS
handler = logging.handlers.SysLogHandler(
address=(server_name, server_port),
socktype=socket.SOCK_STREAM,
)
# Set timeout on the socket
if hasattr(handler, "socket") and handler.socket:
handler.socket.settimeout(timeout)
return handler
else:
# TLS protocol
# Create SSL context with secure defaults
ssl_context = ssl.create_default_context()
# Explicitly set minimum TLS version to 1.2 for security
ssl_context.minimum_version = ssl.TLSVersion.TLSv1_2
# Configure server certificate verification
if cafile_path:
ssl_context.load_verify_locations(cafile=cafile_path)
# Configure client certificate authentication
if certfile_path and keyfile_path:
ssl_context.load_cert_chain(
certfile=certfile_path,
keyfile=keyfile_path,
)
elif certfile_path or keyfile_path:
# Warn if only one of the two required parameters is provided
self.logger.warning(
"Both certfile_path and keyfile_path are required for "
"client certificate authentication. Client authentication "
"will not be used."
)
# Create TCP handler first
handler = logging.handlers.SysLogHandler(
address=(server_name, server_port),
socktype=socket.SOCK_STREAM,
)
# Wrap socket with TLS
if hasattr(handler, "socket") and handler.socket:
handler.socket = ssl_context.wrap_socket(
handler.socket,
server_hostname=server_name,
)
handler.socket.settimeout(timeout)
return handler
except Exception as e:
if attempt < retry_attempts:
self.logger.warning(
f"Syslog connection attempt {attempt}/{retry_attempts} failed: {e}. "
f"Retrying in {retry_delay} seconds..."
)
time.sleep(retry_delay)
else:
self.logger.error(
f"Syslog connection failed after {retry_attempts} attempts: {e}"
)
raise
else:
raise ValueError(
f"Invalid protocol '{protocol}'. Must be 'udp', 'tcp', or 'tls'."
)
def save_aggregate_report_to_syslog(self, aggregate_reports: list[dict[str, Any]]):
rows = parsed_aggregate_reports_to_csv_rows(aggregate_reports)
for row in rows:
self.logger.info(json.dumps(row))
def save_forensic_report_to_syslog(self, forensic_reports: list[dict[str, Any]]):
rows = parsed_forensic_reports_to_csv_rows(forensic_reports)
def save_failure_report_to_syslog(self, failure_reports: list[dict[str, Any]]):
rows = parsed_failure_reports_to_csv_rows(failure_reports)
for row in rows:
self.logger.info(json.dumps(row))
@@ -46,3 +179,7 @@ class SyslogClient(object):
rows = parsed_smtp_tls_reports_to_csv_rows(smtp_tls_reports)
for row in rows:
self.logger.info(json.dumps(row))
# Backward-compatible aliases
SyslogClient.save_forensic_report_to_syslog = SyslogClient.save_failure_report_to_syslog
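A minimal sketch of the new constructor options documented above, assuming the module path parsedmarc.syslog; the hostname, ports, and certificate paths are placeholders. TCP and TLS handlers attempt a connection when created, so running this end to end needs a reachable syslog server.

from parsedmarc.syslog import SyslogClient

# UDP, matching the previous behaviour (protocol defaults to "udp").
udp_client = SyslogClient("syslog.example.com", 514)

# TLS 1.2+ with a private CA and client-certificate authentication;
# both certfile_path and keyfile_path must be given for mutual TLS.
tls_client = SyslogClient(
    "syslog.example.com",
    6514,
    protocol="tls",
    cafile_path="/etc/parsedmarc/ca.pem",
    certfile_path="/etc/parsedmarc/client.crt",
    keyfile_path="/etc/parsedmarc/client.key",
    timeout=10.0,
    retry_attempts=3,
    retry_delay=5,
)

# The renamed saver, with the old name still available as an alias.
assert (
    SyslogClient.save_forensic_report_to_syslog
    is SyslogClient.save_failure_report_to_syslog
)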

View File

@@ -8,7 +8,7 @@ from typing import Any, Dict, List, Literal, Optional, TypedDict, Union
# For optional keys, use total=False TypedDicts.
ReportType = Literal["aggregate", "forensic", "smtp_tls"]
ReportType = Literal["aggregate", "failure", "smtp_tls"]
class AggregateReportMetadata(TypedDict):
@@ -21,6 +21,7 @@ class AggregateReportMetadata(TypedDict):
timespan_requires_normalization: bool
original_timespan_seconds: int
errors: List[str]
generator: Optional[str]
class AggregatePolicyPublished(TypedDict):
@@ -31,6 +32,9 @@ class AggregatePolicyPublished(TypedDict):
sp: str
pct: str
fo: str
np: Optional[str]
testing: Optional[str]
discovery_method: Optional[str]
class IPSourceInfo(TypedDict):
@@ -63,12 +67,14 @@ class AggregateAuthResultDKIM(TypedDict):
domain: str
result: str
selector: str
human_result: Optional[str]
class AggregateAuthResultSPF(TypedDict):
domain: str
result: str
scope: str
human_result: Optional[str]
class AggregateAuthResults(TypedDict):
@@ -119,7 +125,7 @@ ParsedEmail = TypedDict(
"ParsedEmail",
{
# This is a lightly-specified version of mailsuite/mailparser JSON.
# It focuses on the fields parsedmarc uses in forensic handling.
# It focuses on the fields parsedmarc uses in failure report handling.
"headers": Dict[str, Any],
"subject": Optional[str],
"filename_safe_subject": Optional[str],
@@ -138,7 +144,7 @@ ParsedEmail = TypedDict(
)
class ForensicReport(TypedDict):
class FailureReport(TypedDict):
feedback_type: Optional[str]
user_agent: Optional[str]
version: Optional[str]
@@ -159,6 +165,10 @@ class ForensicReport(TypedDict):
parsed_sample: ParsedEmail
# Backward-compatible alias
ForensicReport = FailureReport
class SMTPTLSFailureDetails(TypedDict):
result_type: str
failed_session_count: int
@@ -201,9 +211,13 @@ class AggregateParsedReport(TypedDict):
report: AggregateReport
class ForensicParsedReport(TypedDict):
report_type: Literal["forensic"]
report: ForensicReport
class FailureParsedReport(TypedDict):
report_type: Literal["failure"]
report: FailureReport
# Backward-compatible alias
ForensicParsedReport = FailureParsedReport
class SMTPTLSParsedReport(TypedDict):
@@ -211,10 +225,10 @@ class SMTPTLSParsedReport(TypedDict):
report: SMTPTLSReport
ParsedReport = Union[AggregateParsedReport, ForensicParsedReport, SMTPTLSParsedReport]
ParsedReport = Union[AggregateParsedReport, FailureParsedReport, SMTPTLSParsedReport]
class ParsingResults(TypedDict):
aggregate_reports: List[AggregateReport]
forensic_reports: List[ForensicReport]
failure_reports: List[FailureReport]
smtp_tls_reports: List[SMTPTLSReport]
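A small illustration of the type aliases above, assuming these TypedDicts live in a module such as parsedmarc.types (the file name is not shown in this hunk); the helper function is hypothetical.

from parsedmarc.types import FailureReport, ForensicReport, ParsingResults

# The alias binds the same class object, so annotations written against
# either name refer to the identical TypedDict.
assert ForensicReport is FailureReport

def count_failure_reports(results: ParsingResults) -> int:
    # ParsingResults now carries the renamed "failure_reports" key.
    return len(results["failure_reports"])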

View File

@@ -16,7 +16,7 @@ class WebhookClient(object):
def __init__(
self,
aggregate_url: str,
forensic_url: str,
failure_url: str,
smtp_tls_url: str,
timeout: Optional[int] = 60,
):
@@ -24,12 +24,12 @@ class WebhookClient(object):
Initializes the WebhookClient
Args:
aggregate_url (str): The aggregate report webhook url
forensic_url (str): The forensic report webhook url
failure_url (str): The failure report webhook url
smtp_tls_url (str): The smtp_tls report webhook url
timeout (int): The timeout to use when calling the webhooks
"""
self.aggregate_url = aggregate_url
self.forensic_url = forensic_url
self.failure_url = failure_url
self.smtp_tls_url = smtp_tls_url
self.timeout = timeout
self.session = requests.Session()
@@ -38,9 +38,9 @@ class WebhookClient(object):
"Content-Type": "application/json",
}
def save_forensic_report_to_webhook(self, report: str):
def save_failure_report_to_webhook(self, report: str):
try:
self._send_to_webhook(self.forensic_url, report)
self._send_to_webhook(self.failure_url, report)
except Exception as error_:
logger.error("Webhook Error: {0}".format(error_.__str__()))
@@ -63,3 +63,7 @@ class WebhookClient(object):
self.session.post(webhook_url, data=payload, timeout=self.timeout)
except Exception as error_:
logger.error("Webhook Error: {0}".format(error_.__str__()))
# Backward-compatible aliases
WebhookClient.save_forensic_report_to_webhook = WebhookClient.save_failure_report_to_webhook
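A short sketch of the renamed constructor parameter, assuming the module path parsedmarc.webhook; the URLs are placeholders. Note that only the method name is aliased for backward compatibility, while the keyword argument itself is now failure_url.

from parsedmarc.webhook import WebhookClient

client = WebhookClient(
    aggregate_url="https://hooks.example.com/dmarc/aggregate",
    failure_url="https://hooks.example.com/dmarc/failure",  # formerly forensic_url
    smtp_tls_url="https://hooks.example.com/dmarc/smtp_tls",
    timeout=60,
)

# Callers using the old method name still reach the same implementation.
assert (
    WebhookClient.save_forensic_report_to_webhook
    is WebhookClient.save_failure_report_to_webhook
)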

View File

@@ -29,7 +29,7 @@ classifiers = [
"Operating System :: OS Independent",
"Programming Language :: Python :: 3"
]
requires-python = ">=3.9, <3.14"
requires-python = ">=3.9"
dependencies = [
"azure-identity>=1.8.0",
"azure-monitor-ingestion>=1.0.0",
@@ -48,7 +48,7 @@ dependencies = [
"imapclient>=2.1.0",
"kafka-python-ng>=2.2.2",
"lxml>=4.4.0",
"mailsuite>=1.11.1",
"mailsuite>=1.11.2",
"msgraph-core==0.2.2",
"opensearch-py>=2.4.2,<=3.0.0",
"publicsuffixlist>=0.10.0",

View File

@@ -0,0 +1,48 @@
<feedback xmlns="urn:ietf:params:xml:ns:dmarc-2.0">
<version>1.0</version>
<report_metadata>
<org_name>Sample Reporter</org_name>
<email>report_sender@example-reporter.com</email>
<extra_contact_info>...</extra_contact_info>
<report_id>3v98abbp8ya9n3va8yr8oa3ya</report_id>
<date_range>
<begin>302832000</begin>
<end>302918399</end>
</date_range>
<generator>Example DMARC Aggregate Reporter v1.2</generator>
</report_metadata>
<policy_published>
<domain>example.com</domain>
<p>quarantine</p>
<sp>none</sp>
<np>none</np>
<testing>n</testing>
<discovery_method>treewalk</discovery_method>
</policy_published>
<record>
<row>
<source_ip>192.0.2.123</source_ip>
<count>123</count>
<policy_evaluated>
<disposition>pass</disposition>
<dkim>pass</dkim>
<spf>fail</spf>
</policy_evaluated>
</row>
<identifiers>
<envelope_from>example.com</envelope_from>
<header_from>example.com</header_from>
</identifiers>
<auth_results>
<dkim>
<domain>example.com</domain>
<result>pass</result>
<selector>abc123</selector>
</dkim>
<spf>
<domain>example.com</domain>
<result>fail</result>
</spf>
</auth_results>
</record>
</feedback>
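As a quick orientation, a sketch of parsing this sample and reading the new DMARCbis fields; it mirrors the assertions added in tests.py later in this diff, and the expected values come from the XML above.

import parsedmarc

result = parsedmarc.parse_report_file(
    "samples/aggregate/dmarcbis-draft-sample.xml",
    always_use_local_files=True,
    offline=True,
)
report = result["report"]

# New DMARCbis policy_published fields
policy = report["policy_published"]
print(policy["np"], policy["testing"], policy["discovery_method"])
# -> none n treewalk

# New report_metadata field
print(report["report_metadata"]["generator"])
# -> Example DMARC Aggregate Reporter v1.2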

View File

@@ -0,0 +1,77 @@
<?xml version="1.0"?>
<feedback>
<version>2.0</version>
<report_metadata>
<org_name>example.net</org_name>
<email>postmaster@example.net</email>
<report_id>dmarcbis-test-report-001</report_id>
<date_range>
<begin>1700000000</begin>
<end>1700086399</end>
</date_range>
</report_metadata>
<policy_published>
<domain>example.com</domain>
<adkim>s</adkim>
<aspf>s</aspf>
<p>reject</p>
<sp>quarantine</sp>
<np>reject</np>
<testing>y</testing>
<discovery_method>treewalk</discovery_method>
<fo>1</fo>
</policy_published>
<record>
<row>
<source_ip>198.51.100.1</source_ip>
<count>5</count>
<policy_evaluated>
<disposition>none</disposition>
<dkim>pass</dkim>
<spf>pass</spf>
</policy_evaluated>
</row>
<identifiers>
<envelope_from>example.com</envelope_from>
<header_from>example.com</header_from>
</identifiers>
<auth_results>
<dkim>
<domain>example.com</domain>
<selector>selector1</selector>
<result>pass</result>
</dkim>
<spf>
<domain>example.com</domain>
<scope>mfrom</scope>
<result>pass</result>
</spf>
</auth_results>
</record>
<record>
<row>
<source_ip>203.0.113.10</source_ip>
<count>2</count>
<policy_evaluated>
<disposition>reject</disposition>
<dkim>fail</dkim>
<spf>fail</spf>
<reason>
<type>other</type>
<comment>sender not authorized</comment>
</reason>
</policy_evaluated>
</row>
<identifiers>
<envelope_from>spoofed.example.com</envelope_from>
<header_from>example.com</header_from>
</identifiers>
<auth_results>
<spf>
<domain>spoofed.example.com</domain>
<scope>mfrom</scope>
<result>fail</result>
</spf>
</auth_results>
</record>
</feedback>

View File

@@ -60,10 +60,10 @@ Create Dashboards
9. Click Save
10. Click Dashboards
11. Click Create New Dashboard
12. Use a descriptive title, such as "Forensic DMARC Data"
12. Use a descriptive title, such as "Failure DMARC Data"
13. Click Create Dashboard
14. Click on the Source button
15. Paste the content of ''dmarc_forensic_dashboard.xml`` into the source editor
15. Paste the content of ``dmarc_failure_dashboard.xml`` into the source editor
16. If the index storing the DMARC data is not named email, replace index="email" accordingly
17. Click Save

View File

@@ -1,8 +1,8 @@
<form theme="dark" version="1.1">
<label>Forensic DMARC Data</label>
<label>Failure DMARC Data</label>
<search id="base_search">
<query>
index="email" sourcetype="dmarc:forensic" parsed_sample.headers.From=$header_from$ parsed_sample.headers.To=$header_to$ parsed_sample.headers.Subject=$header_subject$ source.ip_address=$source_ip_address$ source.reverse_dns=$source_reverse_dns$ source.country=$source_country$
index="email" (sourcetype="dmarc:failure" OR sourcetype="dmarc:forensic") parsed_sample.headers.From=$header_from$ parsed_sample.headers.To=$header_to$ parsed_sample.headers.Subject=$header_subject$ source.ip_address=$source_ip_address$ source.reverse_dns=$source_reverse_dns$ source.country=$source_country$
| table *
</query>
<earliest>$time_range.earliest$</earliest>
@@ -43,7 +43,7 @@
</fieldset>
<row>
<panel>
<title>Forensic samples</title>
<title>Failure samples</title>
<table>
<search base="base_search">
<query>| table arrival_date_utc authentication_results parsed_sample.headers.From,parsed_sample.headers.To,parsed_sample.headers.Subject | sort -arrival_date_utc</query>
@@ -59,7 +59,7 @@
</row>
<row>
<panel>
<title>Forensic samples by country</title>
<title>Failure samples by country</title>
<map>
<search base="base_search">
<query>| iplocation source.ip_address| stats count by Country | geom geo_countries featureIdField="Country"</query>
@@ -72,7 +72,7 @@
</row>
<row>
<panel>
<title>Forensic samples by IP address</title>
<title>Failure samples by IP address</title>
<table>
<search base="base_search">
<query>| iplocation source.ip_address | stats count by source.ip_address,source.reverse_dns | sort -count</query>
@@ -85,7 +85,7 @@
</table>
</panel>
<panel>
<title>Forensic samples by country ISO code</title>
<title>Failure samples by country ISO code</title>
<table>
<search base="base_search">
<query>| stats count by source.country | sort - count</query>

197
tests.py
View File

@@ -12,6 +12,9 @@ from lxml import etree
import parsedmarc
import parsedmarc.utils
# Detect if running in GitHub Actions to skip DNS lookups
OFFLINE_MODE = os.environ.get("GITHUB_ACTIONS", "false").lower() == "true"
def minify_xml(xml_string):
parser = etree.XMLParser(remove_blank_text=True)
@@ -121,7 +124,7 @@ class Test(unittest.TestCase):
continue
print("Testing {0}: ".format(sample_path), end="")
parsed_report = parsedmarc.parse_report_file(
sample_path, always_use_local_files=True
sample_path, always_use_local_files=True, offline=OFFLINE_MODE
)["report"]
parsedmarc.parsed_aggregate_reports_to_csv(parsed_report)
print("Passed!")
@@ -129,21 +132,173 @@ class Test(unittest.TestCase):
def testEmptySample(self):
"""Test empty/unparasable report"""
with self.assertRaises(parsedmarc.ParserError):
parsedmarc.parse_report_file("samples/empty.xml")
parsedmarc.parse_report_file("samples/empty.xml", offline=OFFLINE_MODE)
def testForensicSamples(self):
"""Test sample forensic/ruf/failure DMARC reports"""
"""Test sample failure/ruf DMARC reports"""
print()
sample_paths = glob("samples/forensic/*.eml")
for sample_path in sample_paths:
print("Testing {0}: ".format(sample_path), end="")
with open(sample_path) as sample_file:
sample_content = sample_file.read()
parsed_report = parsedmarc.parse_report_email(sample_content)["report"]
parsed_report = parsedmarc.parse_report_file(sample_path)["report"]
parsedmarc.parsed_forensic_reports_to_csv(parsed_report)
parsed_report = parsedmarc.parse_report_email(
sample_content, offline=OFFLINE_MODE
)["report"]
parsed_report = parsedmarc.parse_report_file(
sample_path, offline=OFFLINE_MODE
)["report"]
parsedmarc.parsed_failure_reports_to_csv(parsed_report)
print("Passed!")
def testFailureReportBackwardCompat(self):
"""Test that old forensic function aliases still work"""
self.assertIs(
parsedmarc.parse_forensic_report,
parsedmarc.parse_failure_report,
)
self.assertIs(
parsedmarc.parsed_forensic_reports_to_csv,
parsedmarc.parsed_failure_reports_to_csv,
)
self.assertIs(
parsedmarc.parsed_forensic_reports_to_csv_rows,
parsedmarc.parsed_failure_reports_to_csv_rows,
)
self.assertIs(
parsedmarc.InvalidForensicReport,
parsedmarc.InvalidFailureReport,
)
def testDMARCbisDraftSample(self):
"""Test parsing the sample report from the DMARCbis aggregate draft"""
print()
sample_path = (
"samples/aggregate/dmarcbis-draft-sample.xml"
)
print("Testing {0}: ".format(sample_path), end="")
result = parsedmarc.parse_report_file(
sample_path, always_use_local_files=True, offline=True
)
report = result["report"]
# Verify report_type
self.assertEqual(result["report_type"], "aggregate")
# Verify xml_schema
self.assertEqual(report["xml_schema"], "1.0")
# Verify report_metadata
metadata = report["report_metadata"]
self.assertEqual(metadata["org_name"], "Sample Reporter")
self.assertEqual(
metadata["org_email"], "report_sender@example-reporter.com"
)
self.assertEqual(
metadata["org_extra_contact_info"], "..."
)
self.assertEqual(
metadata["report_id"], "3v98abbp8ya9n3va8yr8oa3ya"
)
self.assertEqual(
metadata["generator"],
"Example DMARC Aggregate Reporter v1.2",
)
# Verify DMARCbis policy_published fields
pp = report["policy_published"]
self.assertEqual(pp["domain"], "example.com")
self.assertEqual(pp["p"], "quarantine")
self.assertEqual(pp["sp"], "none")
self.assertEqual(pp["np"], "none")
self.assertEqual(pp["testing"], "n")
self.assertEqual(pp["discovery_method"], "treewalk")
# adkim/aspf/pct/fo default when not in XML
self.assertEqual(pp["adkim"], "r")
self.assertEqual(pp["aspf"], "r")
self.assertEqual(pp["pct"], "100")
self.assertEqual(pp["fo"], "0")
# Verify record
self.assertEqual(len(report["records"]), 1)
rec = report["records"][0]
self.assertEqual(rec["source"]["ip_address"], "192.0.2.123")
self.assertEqual(rec["count"], 123)
self.assertEqual(
rec["policy_evaluated"]["disposition"], "pass"
)
self.assertEqual(rec["policy_evaluated"]["dkim"], "pass")
self.assertEqual(rec["policy_evaluated"]["spf"], "fail")
# Verify DKIM auth result with human_result
self.assertEqual(len(rec["auth_results"]["dkim"]), 1)
dkim = rec["auth_results"]["dkim"][0]
self.assertEqual(dkim["domain"], "example.com")
self.assertEqual(dkim["selector"], "abc123")
self.assertEqual(dkim["result"], "pass")
self.assertIsNone(dkim["human_result"])
# Verify SPF auth result with human_result
self.assertEqual(len(rec["auth_results"]["spf"]), 1)
spf = rec["auth_results"]["spf"][0]
self.assertEqual(spf["domain"], "example.com")
self.assertEqual(spf["result"], "fail")
self.assertIsNone(spf["human_result"])
# Verify CSV output includes new fields
csv = parsedmarc.parsed_aggregate_reports_to_csv(report)
header = csv.split("\n")[0]
self.assertIn("np", header.split(","))
self.assertIn("testing", header.split(","))
self.assertIn("discovery_method", header.split(","))
print("Passed!")
def testDMARCbisFieldsWithRFC7489(self):
"""Test that RFC 7489 reports have None for DMARCbis-only fields"""
print()
sample_path = (
"samples/aggregate/"
"example.net!example.com!1529366400!1529452799.xml"
)
print("Testing {0}: ".format(sample_path), end="")
result = parsedmarc.parse_report_file(
sample_path, always_use_local_files=True, offline=True
)
report = result["report"]
pp = report["policy_published"]
# RFC 7489 fields present
self.assertEqual(pp["pct"], "100")
self.assertEqual(pp["fo"], "0")
# DMARCbis fields absent (None)
self.assertIsNone(pp["np"])
self.assertIsNone(pp["testing"])
self.assertIsNone(pp["discovery_method"])
# generator absent (None)
self.assertIsNone(report["report_metadata"]["generator"])
print("Passed!")
def testDMARCbisWithExplicitFields(self):
"""Test DMARCbis report with explicit testing and discovery_method"""
print()
sample_path = (
"samples/aggregate/"
"dmarcbis-example.net!example.com!1700000000!1700086399.xml"
)
print("Testing {0}: ".format(sample_path), end="")
result = parsedmarc.parse_report_file(
sample_path, always_use_local_files=True, offline=True
)
report = result["report"]
pp = report["policy_published"]
self.assertEqual(pp["np"], "reject")
self.assertEqual(pp["testing"], "y")
self.assertEqual(pp["discovery_method"], "treewalk")
print("Passed!")
def testSmtpTlsSamples(self):
"""Test sample SMTP TLS reports"""
print()
@@ -152,36 +307,12 @@ class Test(unittest.TestCase):
if os.path.isdir(sample_path):
continue
print("Testing {0}: ".format(sample_path), end="")
parsed_report = parsedmarc.parse_report_file(sample_path)["report"]
parsed_report = parsedmarc.parse_report_file(
sample_path, offline=OFFLINE_MODE
)["report"]
parsedmarc.parsed_smtp_tls_reports_to_csv(parsed_report)
print("Passed!")
def testMSGraphWellKnownFolders(self):
"""Test MSGraph well-known folder name mapping"""
from parsedmarc.mail.graph import WELL_KNOWN_FOLDER_MAP
# Test English folder names
assert WELL_KNOWN_FOLDER_MAP.get("inbox") == "inbox"
assert WELL_KNOWN_FOLDER_MAP.get("sent items") == "sentitems"
assert WELL_KNOWN_FOLDER_MAP.get("deleted items") == "deleteditems"
assert WELL_KNOWN_FOLDER_MAP.get("archive") == "archive"
# Test case insensitivity - simulating how the code actually uses it
# This is what happens when user config has "reports_folder = Inbox"
assert WELL_KNOWN_FOLDER_MAP.get("inbox") == "inbox"
assert WELL_KNOWN_FOLDER_MAP.get("Inbox".lower()) == "inbox" # User's exact config
assert WELL_KNOWN_FOLDER_MAP.get("INBOX".lower()) == "inbox"
assert WELL_KNOWN_FOLDER_MAP.get("Archive".lower()) == "archive"
# Test German folder names
assert WELL_KNOWN_FOLDER_MAP.get("posteingang") == "inbox"
assert WELL_KNOWN_FOLDER_MAP.get("Posteingang".lower()) == "inbox" # Capitalized
assert WELL_KNOWN_FOLDER_MAP.get("archiv") == "archive"
# Test that custom folders don't match
assert WELL_KNOWN_FOLDER_MAP.get("custom_folder") is None
assert WELL_KNOWN_FOLDER_MAP.get("my_reports") is None
if __name__ == "__main__":
unittest.main(verbosity=2)