---
challenge-title: Network Detection of Adversarial Campaigns using Artificial Intelligence and Machine Learning
layout: front-matter-data
permalink: /challenge/network-detection-of-adversarial-campaigns/
challenge-id: 1101
status: closed
sidenav: true
card-image: /assets/images/cards/AI-ATAC-AI-logo.png
agency-logo: dod_seal.jpg
tagline: Adversarial campaign detection utilizing state-of-the-art AI/ML at the network layer
agency: Department of Defense - Naval Information Warfare Systems Command
partner-agencies-federal: Department of Energy, Oak Ridge National Lab, Cybersecurity Research Group
total-prize-offered-cash: $500,000
type-of-challenge: Software and apps; Technology demonstration and hardware; Analytics, visualizations, algorithms
submission-start: 03/02/2020 05:00 PM
submission-end: 06/30/2020 05:00 PM
fiscal-year: FY20, FY21
legal-authority: Agency specific prize authority
challenge-manager: Michael Karlbom
submission-link: true
---

Description

See FREQUENTLY ASKED QUESTIONS.

The Naval Information Warfare Systems Command (NAVWARSYSCOM) and the Program Executive Office for Command, Control, Communications, Computers, and Intelligence (PEO C4I) are conducting a second instance of the Artificial Intelligence Applications to Autonomous Cybersecurity (AI ATAC, pronounced “AI attack”) Challenge (hereinafter “the Challenge”). The Navy’s Cybersecurity Program Office (PMW 130) seeks to automate the Security Operations Center (SOC) using artificial intelligence and machine learning (AI/ML) to detect advanced persistent threat (APT) campaign activity. There is a tremendous amount of untapped cybersecurity information at the network level that can improve situational awareness, provide context for network events, and reveal the presence of adversaries. PMW 130 solicits white papers describing network-based security technologies, along with the corresponding tools, for evaluation in the AI ATAC Prize Challenge competition.

Prizes

NAVWARSYSCOM has established $500,000 as the total amount set aside for cash prizes under this Challenge, and in the case of a lone winner, the entire cash prize will be awarded to the winning entry. In the unlikely event of a tie, NAVWARSYSCOM will determine an equitable method of distributing the cash prizes. If a prize goes to a team of Participants, NAVWARSYSCOM will award the cash prize to the individual/team’s point of contact registered on the Challenge website. 

NAVWARSYSCOM may award, pursuant to 10 U.S.C. § 2371b, a follow-on production contract or transaction to one or more participants who successfully demonstrate an effective AI/ML approach under this Challenge. This Challenge, however, does not in any way obligate NAVWARSYSCOM to procure any of the items within the scope of this Challenge from the winners. Tax treatment of prizes will be handled in accordance with U.S. Internal Revenue Service guidelines. The winner must provide a U.S. taxpayer identification number (e.g., an SSN, TIN, or EIN) to receive the cash prize.

The Challenge winners will be notified via email. NAVWARSYSCOM will announce the winners on the Challenge.gov website and via appropriate channels.

Total Cash Prize Pool

$500,000

Prize Breakdown

$500,000 – 1st place

Rules

Each Participant (individual, team, or commercial entity) shall submit one entry in response to this Challenge. Team entries or commercial entity entries must have an individual identified as the primary point of contact and prize recipient. By submitting an entry, a Participant confirms ownership of the intellectual property of the submission and authorizes his or her name to be released to the media if the Participant wins the prize.

The submission package must include the following three items:

  • White paper *
  • Corresponding technology *. Note this must include:
    • The license(s) to operate the technology on a 10 Gb/s bandwidth network through 31 DEC 2020 on multiple VMs simultaneously.
  • User’s guide. Note this must include:
    • The recommended base configuration of the technology;
    • A description of the resources required to run the technology to support an up to 1,500-node network, if delivered as software or a VM;
    • A representative sample of output data with field descriptions.

* The tool and white paper must contain only unclassified material.

More details on each are provided below.

Please email [email protected] to indicate your intent to submit prior to 30 JUNE 2020.

In order for an entry to be considered, the white paper, corresponding technology, and corresponding user’s guide must be submitted no later than 30 JUNE 2020, in accordance with these submission guidelines.

White Paper Submission Guidelines

It is suggested that participants use the white paper template when submitting their entry. It provides the framework for an overview of the proposed technology as well as the following:

  • Technical approach (e.g. architecture, deployment overview, algorithm description, model description, performance requirements, endpoint footprint, existing results, etc.) including descriptions of the AI/ML components.
  • Benefits and novelty of the approach within the context of existing academic and commercially available technologies

White papers must be no more than six pages in total and, regardless of format, must include the information requested in the white paper template.

The white paper template can be found here.

Where appropriate, use protective markings such as “Do Not Publicly Release – Trade Secret” or “Do Not Publicly Release – Confidential Proprietary Business Information” in the Header or Footer of the Submission. Do not submit any Classified information.

White papers must be submitted along with the Participant’s tool and user’s guide, per the instructions outlined in the submission guidelines below.

Tool Submission Guidelines

Software, virtual machines and/or virtualized management appliances, or hardware for management appliances must be shipped by trackable, non-postal delivery (FedEx, UPS, DHL, etc.) and received no later than 1700 EDT on 30 JUNE 2020 at one of the following addresses:


For courier services (e.g., FedEx, UPS) use:

Cybersecurity Research Group

Oak Ridge National Laboratory

Attn: AI ATAC Evaluation Team

1 Bethel Valley Road Bldg 6012, Room 209

Oak Ridge, TN 37830


For USPS use:

Cybersecurity Research Group

Oak Ridge National Laboratory

Attn: AI ATAC Evaluation Team

P.O. Box 2008, MS 6418

Oak Ridge, TN 37831


User’s Guide Submission Guidelines

The user’s guide must include:

  • A list of the dependencies necessary (e.g. data, platform, network connectivity, etc.) to operate the proposed technology when supporting an up to 1,500-node network;
  • Step-by-step instructions for installation, configuration, and use, including the recommended base configuration of the technology;
  • Documentation describing input data and output data, in particular describing each field in output logs;
  • A representative sample of output with schema, including minimally the descriptions of each field and how to interpret the overall alerts/outputs.

Where appropriate, use protective markings such as “Do Not Publicly Release – Trade Secret” or “Do Not Publicly Release – Confidential Proprietary Business Information” in the Header or Footer of the Submission. Do not submit any Classified information.

Terms and Conditions

These terms and conditions apply to all participants in the Challenge.

Agreement to Terms

The Participant agrees to comply with and be bound by the AI ATAC Challenge Background and Rules (“the Rules”) as well as the Terms and Conditions contained herein. The Participant also agrees that the decisions of the Government in connection with all matters relating to this Challenge are binding and final.

Eligibility

The Challenge is open to individual Participants, teams of Participants, and commercial entities. Participants must either own the intellectual property (IP) in the solution or provide documentation demonstrating exclusive arrangements and/or rights with the IP owner. In either case, only one entry for each commercial technology is allowed. Commercial entities must be incorporated in and maintain a primary place of business in the United States (U.S.). Individual Participants and all members of teams of Participants must all be U.S. citizens or U.S. Permanent Residents and be 18 years or older as of 03 MARCH 2020. All Participants (commercial entities or individuals) must have a Social Security Number (SSN), Taxpayer Identification Number (TIN), or Employer Identification Number (EIN) in order to receive a prize. Eligibility is subject to verification before any prize is awarded.

Federal Government employees, PMW 130 support contractors and their employees, and Oak Ridge National Laboratory (ORNL) employees are not eligible to participate in this Challenge. 

Violation of the rules contained herein or intentional or consistent activity that undermines the spirit of the Challenge may result in disqualification. The Challenge is void wherever restricted or prohibited by law.

Data Rights

NAVWARSYSCOM does not require that Participants relinquish or otherwise grant license rights to intellectual property developed or delivered under the Challenge. NAVWARSYSCOM requires sufficient data rights/intellectual property rights to use, release, display, and disclose the white paper and/or tool, but only to the evaluation team members, and only for purposes of evaluating the Participant submission. The evaluation team does not plan to retain entries after the Challenge is completed but does plan to retain data and aggregate performance statistics resulting from the evaluation of those entries. By accepting these Terms and Conditions, the Participant consents to the use of data submitted to the evaluation team for these purposes.

NAVWARSYSCOM may contact Participants, at no additional cost to the Government, to discuss the means and methods used in solving the Challenge, even if Participants did not win the Challenge. Such contact does not imply any sort of contractual commitment with the Participant.

Because of the number of anticipated Challenge entries, NAVWARSYSCOM cannot and will not make determinations on whether or not third-party materials in the Challenge submissions have protectable intellectual property interests. By participating in this Challenge, each Participant (whether participating individually, as a team, or as a commercial entity) warrants and assures the Government that any data used for the purpose of submitting an entry for this Challenge were obtained legally and through authorized access to such data. By entering the Challenge and submitting the Challenge materials, the Participant agrees to indemnify and hold the Government harmless against any claim, loss or risk of loss for patent or copyright infringement with respect to such third-party interests.

This Challenge does not replace or supersede any other written contracts and/or written challenges that the Participant has or will have with the Government, which may require delivery of any materials the Participant is submitting herein for this Challenge effort. 

This Challenge constitutes the entire understanding of the parties with respect to the Challenge. NAVWARSYSCOM may update the terms of the Challenge from time to time without notice. Participants are strongly encouraged to check the website frequently.

If any provision of this Challenge is held to be invalid or unenforceable under applicable federal law, it will not affect the validity or enforceability of the remainder of the Terms and Conditions of this Challenge.

Results of Challenge

Winners will be announced on the Challenge.gov website and via email. If winners receive notification prior to public announcement, they shall agree not to disclose their winning status until after the Government releases its announcement.

Release of Claims

The Participant agrees to release and forever discharge any and all manner of claims, equitable adjustments, actions, suits, debts, appeals, and all other obligations of any kind, whether past or present, known or unknown, that have or may arise from, are related to or are in connection with, directly or indirectly, this Challenge or the Participant’s submission.

Compliance with Laws

The Participant agrees to follow and comply with all applicable federal, state and local laws, regulations and policies.

Governing Law

This Challenge is subject to all applicable federal laws and regulations. ALL CLAIMS ARISING OUT OF OR RELATING TO THESE TERMS WILL BE GOVERNED BY THE FEDERAL LAWS AND REGULATIONS OF THE UNITED STATES OF AMERICA.

Judging Criteria

The Challenge evaluation will focus on AI/ML technologies that detect adversarial campaigns via network observable behaviors or by analysis of data collected across an enterprise.

The following describes the network being defended in this Challenge:

  • An enterprise network with a relatively flat architecture of at most 1,500 endpoints.
  • All north/south and east/west traffic is observable with a mixture of encrypted and unencrypted traffic. Traffic decryption is not provided.
  • Network services include (minimally) a Firewall, Active Directory, Domain Name Service, and Dynamic Addressing (DHCP), and the network data will contain traffic from these services.
  • Enterprise endpoint defense is a commercial, enterprise-grade endpoint security service product, offering host protection. The candidate product will not receive information directly from or rely upon the host endpoint security service.
  • The environment will have a SIEM, potentially with relevant service logs and endpoint logs that will be available for query.
  • Specific data and formats will be shared at the time of technology installation and configuration (see Challenge Timeline & Evaluation Process).

The following describes the scope of the candidate technologies:

  • This evaluation is focused on detecting adversarial campaigns against a small/medium enterprise network using network observable data and events, with normal proportions of encrypted and unencrypted content.
  • Endpoint detection and protection tools (such as anti-virus or host-based malware detectors), source code analyzers, reverse engineering frameworks, or operations automation frameworks are not eligible.
  • Technologies will be provided a passive tap for the enterprise network, and an ability to interact with the SIEM.
  • Technologies will analyze raw network data streams to determine the presence of an adversarial campaign or individual attack events.
  • Technologies will generate network alerts, ingestible by the SIEM and using an alert format compatible with common SIEMs (e.g. Splunk); a minimal example of such an alert follows this list.
  • Technologies are expected to have an artificial intelligence and/or machine learning component and can also include other complementary approaches, such as signature- or rule-based detection.
  • To measure how AI/ML improves adversarial campaign detection, the test data will use commercially available and custom developed campaigns using various exploitation frameworks.
  • For each adversarial campaign, the range and associated analytic technologies will be reset to a known state.
  • Technologies that leverage learning online or from historical data will be provided three weeks of ambient network data to form their models.
  • Technologies that rely on learning online or from historical data must provide the capability to snapshot, or save, and then reload a built model (a save/reload sketch also follows this list).
  • Technologies must operate completely on-premises. There will be no external connectivity available during the Challenge. Technologies that require an external/cloud connection will be disqualified.
  • Technologies are expected to provide visibility into their resource usage and computational performance in an easily accessible manner.
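For illustration, here is a minimal sketch of a SIEM-ingestible alert, assuming a JSON payload posted to a Splunk HTTP Event Collector (HEC) endpoint. The endpoint URL, token, and event field names are assumptions of this sketch, not formats mandated by the Challenge:

```python
import json
import time
import urllib.request

# Hypothetical alert record; the field names are illustrative, not mandated.
alert = {
    "time": time.time(),
    "sourcetype": "ai_atac:alert",
    "event": {
        "severity": "high",
        "technique": "lateral_movement",  # e.g., a kill-chain stage label
        "src_ip": "10.0.12.34",
        "dst_ip": "10.0.45.67",
        "confidence": 0.92,               # model score behind the alert
        "description": "Anomalous SMB session fan-out from workstation",
    },
}

# Splunk's HTTP Event Collector accepts JSON events over HTTP with a token
# header. The URL and token below are placeholders for the range's SIEM.
req = urllib.request.Request(
    "https://siem.example.internal:8088/services/collector/event",
    data=json.dumps(alert).encode("utf-8"),
    headers={
        "Authorization": "Splunk 00000000-0000-0000-0000-000000000000",
        "Content-Type": "application/json",
    },
)
urllib.request.urlopen(req)
```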
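Likewise, a minimal sketch of the snapshot/reload requirement, assuming a scikit-learn anomaly detector persisted with joblib; the IsolationForest model and the per-flow features are placeholders, not part of the Challenge specification:

```python
import joblib
from sklearn.ensemble import IsolationForest

# Train on ambient traffic features (placeholder data here); the feature
# extraction is a stand-in, as the Challenge does not prescribe one.
features = [[0.1, 12, 3], [0.2, 15, 2], [0.1, 11, 4]]  # e.g., per-flow statistics
model = IsolationForest(random_state=0).fit(features)

# Snapshot: persist the built model to disk before the range is reset.
joblib.dump(model, "ambient_model.joblib")

# Reload: restore the exact model state after the reset, with no retraining.
restored = joblib.load("ambient_model.joblib")
scores = restored.decision_function([[0.9, 80, 30]])  # lower = more anomalous
```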

Environment

The evaluation environment will be the Cybersecurity Operations Research Range (CORR) at Oak Ridge National Laboratory (ORNL). Submitted technologies must include documentation necessary for installation and configuration, and also an evaluation license that covers minimally the following conditions:

  • A trial period through December 2020,
  • A maximum 10 Gb/s supported network data rate, and/or
  • Supporting traffic from a network up to 1,500 nodes in size.

The software and/or hardware components for the on-premises console must be one of the following:

  • Exported Virtual Machine Image (e.g., .ova, .qcow2, etc.) that can be run with a libvirt-compatible hypervisor (e.g. QEMU, XenServer, VMware, VirtualBox, Emulab, etc.)
  • Docker container package
  • Standalone hardware appliance that will not be connected to external cloud services

Challenge Evaluation Process

The evaluation process will include four major steps once submissions are received.

  1. White Paper Analysis. ORNL’s team, in consultation with NAVWAR, will review the submitted white papers to determine eligibility and down-select to a subset of the technologies to be evaluated.
  2. Technology Installation and Configuration. For those submissions remaining, the evaluation team will attempt to install and configure each technology in the range environment, in order to assure communication with the range data interfaces and to tune each technology’s configuration. Up to two days of on-site support by the submitting organization may be required to optimize the configuration. At the conclusion of the Installation and Configuration stage, the technology configuration will be locked.
  3. Training & Tuning: The generation of ambient data for ML model training will happen once the range configuration is locked. Each technology will be exposed to approximately three weeks of traffic for ML model generation.
  4. Evaluation: The evaluation of each tool will involve levying multiple adversarial campaigns against the enterprise, with increasing complexity and realism. Technologies will be evaluated on their ability to detect elements of each campaign as it progresses. Resource information (CPU usage, memory usage, disk I/O, network I/O) associated with the operation of the technology will be collected during each test within the evaluation; a monitoring sketch follows this list.
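A minimal sketch of the kind of resource visibility described in step 4, assuming the cross-platform psutil library; the sampling loop and the JSON-lines log format are choices of this sketch, not requirements:

```python
import json
import time
import psutil

# Periodically record the counters named in step 4: CPU, memory, disk I/O,
# and network I/O.
def sample():
    disk = psutil.disk_io_counters()
    net = psutil.net_io_counters()
    return {
        "ts": time.time(),
        "cpu_percent": psutil.cpu_percent(interval=1),
        "mem_percent": psutil.virtual_memory().percent,
        "disk_read_bytes": disk.read_bytes,
        "disk_write_bytes": disk.write_bytes,
        "net_sent_bytes": net.bytes_sent,
        "net_recv_bytes": net.bytes_recv,
    }

with open("resource_log.jsonl", "a") as log:
    for _ in range(5):  # a real monitor would loop for the whole test
        log.write(json.dumps(sample()) + "\n")
```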

Scoring

This challenge seeks to test how much of an adversarial campaign (a sequence of events towards an exploitative goal) the candidate technologies can uncover. The score will be a cost estimate that simulates the cost to an enterprise of using the technology for a given period of time. The cost estimate will sum simulated attack costs, labor costs, and resource costs, as sketched after the list below.

  • Simulated attack cost: Estimated costs resulting from each campaign’s malicious events will be a function representing attack costs over time. This simulates costs of lost or corrupted data, ransoms, etc. The attack cost function increases with time and with steps of the attack kill chain, so that early detection accrues less cost than later (or no) detection. For a true positive (alerting on a malicious event), attack cost accrues only until the time of detection; a false negative (no alert on a malicious event) results in the maximum attack cost.
  • Simulated security operator cost: Estimated SOC costs, based on actual labor rates of security operators, will be computed for initial setup and ongoing use of the technology. For each alert issued by the technology, labor costs for triage, investigation, and incident response will be incurred. Ongoing security operator costs are incurred for both true positives (alerting on malicious events), and false positives (alerting on benign events).
  • Simulated resource cost: Estimated resource costs for initial setup and ongoing use of the technology will be based on rates from recent research. Ongoing resources that will be monitored and incorporated may include, but are not limited to, usage of CPU, volatile memory, disk I/O, and network I/O.
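The Challenge does not publish its exact cost functions, so the following sketch only illustrates the structure described above; the rates and the shape of attack_cost are assumptions of this sketch:

```python
# Illustrative only: all rates and functional forms below are assumed.

def attack_cost(t_detect, kill_chain_step, t_max=72.0, rate=1_000.0):
    """Attack cost grows with elapsed hours and kill-chain progress.
    A false negative (t_detect is None) accrues the maximum cost."""
    t = t_max if t_detect is None else min(t_detect, t_max)
    return rate * t * kill_chain_step

def operator_cost(n_true_alerts, n_false_alerts, hourly_rate=60.0, hours_per_alert=0.5):
    """Triage/investigation labor is paid for every alert, malicious or benign."""
    return (n_true_alerts + n_false_alerts) * hours_per_alert * hourly_rate

def resource_cost(cpu_hours, gb_ram_hours, cpu_rate=0.05, ram_rate=0.01):
    """Ongoing compute charged at assumed per-unit rates."""
    return cpu_hours * cpu_rate + gb_ram_hours * ram_rate

# One campaign: detected at hour 6 during kill-chain step 2, with 3 true and
# 40 false alerts, using 500 CPU-hours and 2,000 GB-RAM-hours.
total = (attack_cost(6.0, 2)
         + operator_cost(3, 40)
         + resource_cost(500, 2000))
print(f"simulated total cost: ${total:,.2f}")  # lower is better
```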

THE WINNERS OF THE CHALLENGE WILL BE THE PARTICIPANTS WITH THE LOWEST COMPUTED TOTAL COST IN ACCORDANCE WITH THE SCORING ABOVE.

How To Enter

See the Rules section for submission guidelines.