diff --git a/docs/srde/01_getting_started/01_intro.md b/docs/srde/01_getting_started/01_intro.md new file mode 100644 index 00000000..a1e8f57d --- /dev/null +++ b/docs/srde/01_getting_started/01_intro.md @@ -0,0 +1,5 @@ +# Start here! + +Welcome to the Secure Research Data Environment documentation! If you do not have an active project, please proceed to the next section, which explains the eligibility criteria and how you may request one. + +If you are an active user, you can proceed to one of the categories on the left. diff --git a/docs/srde/01-intro.md b/docs/srde/01_getting_started/02_eligibility_accounts.md similarity index 96% rename from docs/srde/01-intro.md rename to docs/srde/01_getting_started/02_eligibility_accounts.md index 4ccb9949..9b9f2a04 100644 --- a/docs/srde/01-intro.md +++ b/docs/srde/01_getting_started/02_eligibility_accounts.md @@ -1,10 +1,6 @@ ---- -sidebar_position: 1 ---- # Secure Research Data Environment (SRDE) - The Secure Research Data Environment (SRDE) is a centralized secure computing platform designed to support research projects that require storage and computational resources. It provides a space for researchers to design and build secure, scalable, and resilient environments to store, share, and analyze moderate and high-risk data, as per the [NYU Electronic Data and System Risk Classification Policy](https://www.nyu.edu/about/policies-guidelines-compliance/policies-and-guidelines/electronic-data-and-system-risk-classification.html). The Research Technology Services team leverages cloud computing resources to provide flexible, scalable, remotely accessible secure virtual environments. The team provides researchers with consultations and resources to comply with security requirements of research grants and Data Use Agreements (DUAs). SRDE resources intend to meet the security controls outlined in the NIST SP 800-171 to safeguard Controlled Unclassified Information (CUI). @@ -26,11 +22,3 @@ The SRDE form contains details about the project, such as if the project require [Link to the Secure Research Data Environment Intake Form](https://nyu.qualtrics.com/jfe/form/SV_3Vok9ax87Bxxdsy). After you submit the intake form, the SRDE team will review the submitted documents and will respond to schedule the consultation. - -## SRDE User Guide - -Coming soon! - -## Support - -Please email your questions to: srde-support@nyu.edu diff --git a/docs/srde/01_getting_started/_category_.json b/docs/srde/01_getting_started/_category_.json new file mode 100644 index 00000000..c631500f --- /dev/null +++ b/docs/srde/01_getting_started/_category_.json @@ -0,0 +1,3 @@ +{ + "label": "Getting Started" +} diff --git a/docs/srde/02_user_guide/01_connecting.mdx b/docs/srde/02_user_guide/01_connecting.mdx new file mode 100644 index 00000000..41f1fc67 --- /dev/null +++ b/docs/srde/02_user_guide/01_connecting.mdx @@ -0,0 +1,335 @@ +# Connecting to SRDE + +This section explains remote access to the secure environment workspace via the Command Line Interface (CLI). The SRDE consists of two separate servers: +- The Workspace Host is where you access and analyze data. +- The Bastion Host acts as a proxy that allows your laptop/workstation to connect securely via the internet to the Workspace Host. + +Accessing the Secure Environment Workspace Host remotely via the CLI is a two-step process: first you must connect to the Bastion Host, and then from the Bastion Host access your Workspace Host.
The two-step process is demonstrated below: +![Virtual Private Network](./static/vpc_basics.png) + +This two-step process is enabled with the use of SSH keys and SSH agent forwarding, and is described in detail for the common operating systems (MacOS/Linux and Windows) in the following sections of the User Guide. For more general information about using SSH keys and bastion hosts, [see here](https://medium.com/devops-dudes/setting-up-an-ssh-agent-to-access-the-bastion-in-vpc-532918577949). + +User access to the secure environment is controlled by the [Identity-Aware Proxy (IAP)](https://cloud.google.com/security/products/iap?hl=en) Google Cloud service. IAP provides a central way of managing user access and enforcing access control policies, without requiring external/public IP addresses on the Bastion Host and the Workspace Host. + +::::tip[Prerequisites] +In order to access your Secure Environment Workspace Host, you will need the following information, provided by the Secure Research Data Admins: +- **Project ID** for the Bastion Host (e.g. test-dev1-bastion-1234) +- **Zone Name** (e.g. us-east4-a) + +:::note +At this time you are not required to be on the NYU Network (or VPN into the NYU Network) in order to access the Secure Environment workspace. + +::: + +:::: + + +## Connecting through Google Cloud Console +Navigate to the Google Cloud Console at https://console.cloud.google.com/welcome and log in with your NetID. Click the Select a project drop-down list at the top left corner of the page. In the Select a project window that appears, search for and select the bastion project using the provided project ID (e.g. test-dev1-bastion-1234). + +![Select a project](./static/select_project.png) + +Once selected, navigate to the VM Instances page via the Navigation menu (Menu in the top left corner of the page) > Compute Engine > VM Instances. A running Bastion instance will be visible on the page, as shown below: + +![Bastion Instance](./static/bastion_instance.png) + +SSH to the Bastion instance by clicking the SSH button. A new SSH-in-browser tab will appear with a restricted CLI connected to the instance. We are now inside the Bastion Host. + +![SSH in browser](./static/ssh_in_browser.png) + +Now we can SSH to our Workspace Host using its internal IP address `10.0.0.2`: +```sh +ssh 10.0.0.2 +``` +This will open the workspace CLI, giving access to the Workspace Host and the computing resources needed to work on our data. + +## Connecting through Google Cloud Shell + +Navigate to https://shell.cloud.google.com/ while logged in with your NetID. + +### Setting project and zone + +Note: ask your SRDE administrator for the appropriate GCP PROJECT_ID and ZONE_NAME. Replace the values in the two commands below and run them: + + +```sh +gcloud config set project PROJECT_ID +gcloud config set compute/zone ZONE_NAME +``` + +### Confirm settings + +Before proceeding, confirm that the project and zone match your GCP project ID and zone: + + +```sh +gcloud config list + + +[compute] +region = us-east4 +zone = us-east4-a +[core] +account = netid@nyu.edu +disable_usage_reporting = False +project = test-dev1-bastion-1234 + + +Your active configuration is: [default] +``` + + +### Generate SSH keys + +:::tip[Unused keys expire!] +Google Cloud Shell will delete your files, including generated SSH keys, if they are not accessed for 120 days. If this happens you will need to generate them again.
+ +::: + +The simplest way to generate SSH keys is to delegate the key generation to gcloud. To trigger key creation, run the following command. + +:::note +Ignore the result of this command; it will most likely print errors to the output console. + +::: + +```sh +gcloud compute ssh bastion-vm +``` + +You will be prompted to enter an SSH passphrase. This is optional; however, it is recommended for additional security. +![Getting into Bastion](./static/getting_into_bastion.png) + +The above command should log you into the bastion VM. You will see a prompt like: +```sh +-bash-4.4$ +``` +Before proceeding, exit back to your local machine: + +```sh +exit +``` + +Then make sure the above step created two keys in your SSH directory (`~/.ssh`), as shown below: + +```sh +ls ~/.ssh +``` + +![List ssh keys](./static/ls_dot_ssh.png) + +Start the ssh-agent on your local machine: + +```sh +eval `ssh-agent -s` +``` +Add the google_compute_engine key to your SSH session: + +```sh +ssh-add ~/.ssh/google_compute_engine +``` +Connect to the instance with gcloud using the `--ssh-flag="-A"` flag. +:::note +This command uses the default project and zone set above. + +::: + +```sh +gcloud compute ssh bastion-vm --ssh-flag="-A" --tunnel-through-iap +``` + +### Add SSH key to session +Run the following command to confirm that the google_compute_engine key has been added to the current session: +```sh +ssh-add -L +``` +Connect to the workstation-vm: + +```sh +ssh 10.0.0.2 +``` + +### Future logins +After the initial login, you will not need to regenerate the SSH keys, but you will need the rest of the command sequence from “Start the ssh-agent” onwards. On your local machine: +```sh +eval `ssh-agent -s` +ssh-add ~/.ssh/google_compute_engine +gcloud compute ssh bastion-vm --ssh-flag="-A" --tunnel-through-iap --project=PROJECT_ID +``` + +And then on the bastion VM: + +```sh +ssh 10.0.0.2 +``` + + +## Connecting on MacOS/Linux + +### Install gcloud CLI +Follow the [official guidelines](https://cloud.google.com/sdk/docs/install-sdk#installing_the_latest_version) to install the latest version of the gcloud CLI locally on your computer. + +:::note +After completing the gcloud installation, verify that the `gcloud` binary is in your $PATH environment variable. + +::: + +### Configure local gcloud settings +Run the following command. It generates a link, as shown below: + +```sh +gcloud auth login --no-launch-browser +``` + +![GCP authenticate login](./static/auth_login.png) + +Copy the link and open your Chrome browser in incognito mode to sign in. Your username is your NYU NetID email address, e.g. netid@nyu.edu. + + +You will be redirected to the NYU SSO page and MFA verification through Duo Push. After successfully logging in, you will be asked to allow the Google SDK to access your account, as shown below: + +![OAuth consent screen](./static/oauth_consent.png) + +Pressing the “Allow” button on this page will present the authorization code. Copy the code and paste it into the terminal. If this step is successful, you should see the following text printed to the console: **You are now logged in as [netid@nyu.edu].** + +### Connect to the workspace +Follow the same instructions as for Google Cloud Shell, starting from the section on setting the project and zone.
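+ +For convenience, the complete MacOS/Linux login sequence is collected below. This is a sketch that assumes the same `bastion-vm` instance name and workspace internal IP (`10.0.0.2`) used throughout this guide; substitute the PROJECT_ID and ZONE_NAME values provided by the SRDE team: + +```sh +# One-time setup: authenticate and point gcloud at your bastion project +gcloud auth login --no-launch-browser +gcloud config set project PROJECT_ID +gcloud config set compute/zone ZONE_NAME + +# Each session: start the agent, load the key, and tunnel to the bastion +eval `ssh-agent -s` +ssh-add ~/.ssh/google_compute_engine +gcloud compute ssh bastion-vm --ssh-flag="-A" --tunnel-through-iap + +# Then, from the bastion prompt, connect to the workspace +ssh 10.0.0.2 +```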
+ +## Connecting on Windows 10/11 + +### Start and Configure SSH-Agent Service +Using an elevated PowerShell window (run as admin), execute the following command to install the SSH-Agent service and configure it to start automatically when you log into your machine: +```ps +Get-Service ssh-agent | Set-Service -StartupType Automatic -PassThru | Start-Service + +``` +![Run a script in PowerShell](./static/powershell_script.png) + +### Install gcloud CLI +Download the [Google Cloud CLI installer](https://dl.google.com/dl/cloudsdk/channels/rapid/GoogleCloudSDKInstaller.exe) and run the installer. + +![GCloud Installer](./static/gcloud_windows_installer.png) + +Alternatively, run the following command to download and install: +```ps +(New-Object Net.WebClient).DownloadFile("https://dl.google.com/dl/cloudsdk/channels/rapid/GoogleCloudSDKInstaller.exe", "$env:Temp\GoogleCloudSDKInstaller.exe") + +& $env:Temp\GoogleCloudSDKInstaller.exe + +``` + +### Install Git +Download the Git Bash setup from the official website, https://git-scm.com/, and run the installer. + + +### Install PuTTY +Download and install PuTTY from https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html. + +After installation, verify that the PuTTY authentication agent (Pageant) is installed and available. + +For the 64-bit installer, you will find this executable at `C:\Program Files\PuTTY\pageant.exe`. + + +### Install Python (version 3 or later) +Install Python from the official website: https://www.python.org/downloads/ + +Remember to check “Add Python to PATH” in the installer. + +Make sure it is installed and available on PATH. On many systems Python comes pre-installed; you can check by running the python command to start the Python interpreter. + +![Python in CMD Prompt](./static/cmd_prompt_python.png) + +On Windows you can also try the py command, a launcher that is more likely to work. If Python is installed, you will see a response that includes the version number, for example: + + +![Py in CMD Prompt](./static/cmd_prompt_py.png) + +### Logging in +Authenticate gcloud by starting a new Command Prompt or PowerShell session. Initialize and log in to gcloud with your account (you will be redirected to the browser for authentication): +```ps +gcloud auth login +``` + +![OAuth on windows](./static/oauth_consent_windows.png) + +![GCP login CMD prompt](./static/cmd_prompt_gcp_login.png) + +Run Git Bash and start the ssh-agent on your local machine: + +```sh +eval `ssh-agent -s` +``` +![SSH Agent command](./static/mingw_eval_ssh.png) + +Add the SSH key to the agent by launching Pageant: + +```ps +& "C:\Program Files\PuTTY\pageant.exe" +``` +![PAgent Windows](./static/pagent_windows.png) + +The app runs in the background; you can find it in the system tray. +![PAgent in tray](./static/pagent_in_tray.png) + +Right-click the icon and select "Add Key". Add the google_compute_engine key with the PPK extension (~/.ssh/google_compute_engine.ppk) to your agent: +![PPK Add Key](./static/ppk_add_key.png) + + +:::tip[Skip this step in the future] +Go to the Pageant shortcut icon in the Windows Start Menu or on your desktop. + +Right-click on the icon, and click on Properties. (If Properties is not an option on the menu, click on Open file location, then right-click on the Pageant icon, and click on Properties.) + +::: + +![PPK properties](./static/ppk_properties_trust.png) + +From the Shortcut tab, edit the Target field. Leave the path to pageant.exe intact. After that path, add the path to your Google .ppk key file.
+ +:::warning[Critical] +The key path should be outside the quotation marks. + +::: + +Here’s an example: +```ps +"C:\Program Files\PuTTY\pageant.exe" C:\Users\Sam\.ssh\google_compute_engine.ppk +``` + +![PAgent properties](./static/pagent_properties_popout.png) + +### SSH into the bastion VM from Git Bash +:::tip +Ask your SRDE administrator for the appropriate GCP project ID. + +::: + +Replace gcp-project-id with that information in the command below: +```sh +export PROJECT_ID=gcp-project-id; + +gcloud compute ssh bastion-vm --ssh-flag="-A" --zone=us-east4-a --tunnel-through-iap --project=${PROJECT_ID} + +``` +![Export Project ID and login](./static/mingw_export_project_gcloud_login.png) + +When you SSH to the bastion in the Git Bash window, a new PuTTY terminal appears with the bastion connection. +![PUTTy bastion](./static/putty_bastion.png) + +A PuTTY security alert window may pop up asking you to accept the host key; click Accept. +![PUTTy security alert](./static/putty_security_alert.png) + +### Add SSH key to session + +Run the following command to confirm that the google_compute_engine key has been added to the current session: +```sh +ssh-add -L +``` + +Connect to the workstation-vm: + +```sh +ssh 10.0.0.2 +``` + +![PUTTy ssh to vm](./static/putty_ssh_to_vm.png) diff --git a/docs/srde/02_user_guide/02_data_access.mdx b/docs/srde/02_user_guide/02_data_access.mdx new file mode 100644 index 00000000..4e3c6cfa --- /dev/null +++ b/docs/srde/02_user_guide/02_data_access.mdx @@ -0,0 +1,32 @@ +# Data Access +After you are connected to the Workspace Host using SSH per the instructions in the previous section, you can access data that has been placed in the workspace ingress bucket by your Data Steward. You can use the gsutil ls and cp commands to copy the data into your home directory using the steps described below. + +Use the following command to see the list of folders in your workspace: +```sh +gsutil ls +``` + +![Running "gsutil ls"](./static/gsutils_ls.png) + +As shown above, there are several folders. Data that has been transferred into the workspace is in the Ingress folder. Use the following command to list the objects in the ingress folder, replacing the path with your project’s path: + +```sh +gsutil ls gs://your-workspace-ingress-path +``` + +![Running "gsutil ls" on your workspace path](./static/gsutils_ls_on_your_workspace_path.png) + +List the contents of the folder with the timestamp corresponding to the date the data was transferred into the workspace, and you will see the files that were uploaded: +```sh +gsutil ls gs://your-workspace-ingress-path/data-timestamp-folder +``` + +![Running "gsutil ls" on timestamped folder within your workspace path](./static/gsutils_ls_on_your_workspace_path_timestamped_folder.png) + +To copy the files into your home directory, use the gsutil cp command (the trailing period copies into your current directory): + +```sh +gsutil cp gs://your-workspace-ingress-path/data-timestamp-folder/filename . +``` +![Copying with "gsutil cp"](./static/gsutils_cp.png) + diff --git a/docs/srde/02_user_guide/03_data_transfers.mdx b/docs/srde/02_user_guide/03_data_transfers.mdx new file mode 100644 index 00000000..ab795e0c --- /dev/null +++ b/docs/srde/02_user_guide/03_data_transfers.mdx @@ -0,0 +1,162 @@ +# Managing Data Transfer + +Every research project that is using the Secure Research Data Environment (SRDE) must have an assigned Data Steward. The Data Steward is responsible for ingesting data into, and egressing data from, the secure environment, following the processes described below.
Currently, the data steward role cannot be combined with other roles in the project; in other words, a data steward cannot also be a user/researcher in the project. + +:::tip[Data Steward role] +The project PI must inform the SRDE team who the assigned Data Steward is for the research project before the project is deployed on the SRDE. + +::: + +## Data Ingestion Process + +Ingesting data into the secure environment is a two-step process: first the Data Steward must upload the data to the staging GCP Storage Bucket, and then “push” the data into the secure Workspace environment. +![Data ingestion process overview](./static/data_ingestion_process_overview.png) + +### Uploading Data to the Staging Area + +#### Option 1: Using the Web Console Interface +Log into the GCP console, set the project to your staging project (e.g. srde-staging-dev), and navigate on the side panel to Cloud Storage -> Buckets: +![GCP Cloud Storage Buckets](./static/gcp_cloud_storage_buckets.png) + +Navigate to your research workspace’s corresponding Staging Ingress bucket: +![GCP Cloud Storage staging ingress buckets](./static/gcp_staging_ingress_bucket.png) + +Copy data to the Staging Ingress bucket: +![GCP Cloud Storage copy to ingress bucket](./static/gcp_copy_to_ingress_bucket.png) + +#### Option 2: Using the CLI +Follow the instructions in the [Connecting to SRDE](01_connecting.mdx) section to install and configure gcloud on your workstation. Once this is done, run the following command to find your workspace’s bucket: +```sh +gsutil ls | fgrep [Workspace Name] +``` + +The workspace name will be given to you by the SRDE team after your workspace has been provisioned. The command above should output two buckets: one will be for data ingest (ingress) and the other for data egress: +```sh +nyu10003@cloudshell:~ (srde-staging-dev-cedd)$ gsutil ls | fgrep example +gs://nyu-us-east4-example-staging-egress-9d94/ +gs://nyu-us-east4-example-staging-ingress-4bd9/ +``` + +To ingest data into the SRDE, run the following command to copy individual files into the ingress bucket: +```sh +gsutil cp [FILENAME] gs://[INGRESS BUCKET] +``` + +So, for instance, the following command would copy an individual text file (1661-0.txt) into the example ingress bucket: +```sh +gsutil cp 1661-0.txt gs://nyu-us-east4-example-staging-ingress-4bd9/ +``` + +To copy a folder, you need to add -r after cp: +```sh +gsutil cp -r [FOLDER] gs://[INGRESS BUCKET] +``` + +We would use the following command to copy a folder named dataset into the example ingress bucket: +```sh +gsutil cp -r dataset gs://nyu-us-east4-example-staging-ingress-4bd9/ +``` + +### Push Data to the Research Workspace Using Airflow +Once the data is in the Staging Ingress bucket, navigate to Cloud Composer and click on Airflow: +![Push data using airflow](./static/push_using_airflow.png) + +In Airflow you will see the DAG workflows for your project. If you do not see any DAGs, contact srde-support@nyu.edu with the subject line “Missing Airflow Permissions”. +![Airflow permissions](./static/airflow_permissions.png) + +Once you see the workflows for your project, click the one named **[project-id]_Ingress_1_Staging_to_Workspace**, which will bring you to the DAG page. On the DAG page, click on the “play” button at the top right to trigger the DAG: +![DAG trigger](./static/dag_trigger.png) + +The DAG may take a few minutes to run. You can see its progress in the status display on the bottom left. +![DAG status](./static/dag_status.png) + +The display shows a list of tasks executed by the DAG.
A light green square will appear next to a task while it is running, and will turn dark green when it is complete. When all tasks have finished successfully, the DAG is done. + +Researchers will now be able to see the data in the ingress bucket in the research project workspace. + +:::note[Access policy for Data Stewards] +Data stewards do not have access to the research project workspace. +::: + +Instructions for researchers who need to access the ingested data in the research workspace are found in the [Data Access section](02_data_access.mdx) of this document. + +## Data Egress Process +To transport data out of the SRDE project workspace, research team members copy the files to be exported to the 'export' folder in the Researcher Workspace Egress bucket; a sample command is shown below: +![Data egress via gsutils cp](./static/gsutils_cp_egress.png) + +After the files have been copied to the export folder in the egress bucket within the workspace, researchers will notify the Data Steward that they are ready to export. The Data Steward will first move the files to the Staging Egress folder and scan them using the Data Loss Prevention API, a tool for automatically detecting sensitive data types. Next, they will check the generated report and either pass the inspection or fail it. Passing the inspection moves the data onwards to the Research Data Egress project for external sharing. Failing the inspection blocks the export. +![Data egress process overview](./static/data_egress_process_overview.png) + +### Push the data from the Research Workspace to Staging +First, run Egress DAG #1 to move files to the Staging Egress folder. Follow the same instructions as above to navigate to the Airflow page. + +Once on the Airflow page, find the DAG named **[project-id]_Egress_1_Workspace_to_Staging_Inspection**. +![Find the relevant Airflow DAG](./static/find_relevant_dag.png) + +Once on the DAG page, follow the steps to trigger the DAG, as instructed above. This DAG executes several tasks: +- An archive copy of the export files is created within the workspace. +- The export files are moved to the staging environment. +- A DLP inspection is run to scan the exported files for sensitive data. +The DLP scan may take some time to run, so wait for all tasks to be marked as successful (dark green) before proceeding. + +### Check the DLP inspection report +After Stage 1 is successfully completed, the DLP inspection findings are written to BigQuery. To examine the results, navigate to BigQuery by going to the Google Cloud console, typing BigQuery in the search bar, and selecting it from the list. +![Big Query](./static/big_query.png) + +Once in BigQuery, on the Explorer tab on the left, click on the corresponding project, then on the table that corresponds to the scan that was done. The name will contain the UTC date and time of the scan, using the format **dlp_YYYY-MM-DD-HHMMSS**. You can verify the report’s creation time under the “Details” tab. +![Big Query exporter tab](./static/big_query_exporter_tab.png) + +Select “Query > In new tab” to examine the results. The following default query will return a sample of 1000 results: +```sql +SELECT * FROM `table_id` LIMIT 1000 +``` +For more information on querying the DLP report, see the [DLP Interpretation Guide](../03_dlp/dlp.mdx). + +Click on Run to run the query and review the results of the scan.
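+ +If the default query returns more than you want to review, you can narrow it. For example, the following sketch (using the same `table_id` placeholder; the field names and values match those described in the [DLP Interpretation Guide](../03_dlp/dlp.mdx)) returns only the findings rated high sensitivity: +```sql +SELECT quote, info_type.name, likelihood +FROM `table_id` +WHERE info_type.sensitivity_score.score = "SENSITIVITY_HIGH" +LIMIT 1000 +```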
After running the query, you will see the results on the lower half of the window: +![Big Query scan result](./static/big_query_scan_result.png) + +### Pass or fail the inspection + +Once the results are reviewed, the Data Steward approves or denies movement to the external egress bucket. They navigate back to the Airflow page and choose one of the following options: +- If the DLP scan results are NOT approved, the Data Steward fails the data export by running **Egress_2_Staging_Fail_inspection**. Once on the DAG page, follow the steps to trigger the DAG, as instructed above. The data will be fully deleted from staging, and only the archived copy will remain in the workspace. +- If the DLP scan results ARE approved, the Data Steward passes the data export by running **Egress_3_Staging_Pass_Inspection**. Once on the DAG page, follow the steps to trigger the DAG, as instructed above. The data will be transferred to the project’s external egress bucket, where the researchers will be able to access and share it. +After the final egress DAG completes successfully, the Data Steward should notify the researchers either a) that their data is available in the external egress bucket or b) that their data export was denied and why. + +## Moving Files to Export +You can use the gsutil cp command to copy data from your home directory to the Egress export folder in the workspace using the following steps. Use the gsutil ls command to see the list of folders in your workspace. Copy your file into the Egress folder, adding /export/yourfilename to the Egress folder path: +```sh +gsutil cp data_file.txt gs://egress_bucket_path/export/data_file.txt +``` +![cp from export via gsutils](./static/gsutils_cp_export.png) + +## Auto-Inspection +When files are added to the export folder, they are automatically scanned for sensitive data using the Data Loss Prevention API. This is the same tool that the Data Steward will use to examine your exported data and approve or deny the export, so you should review the results of the auto-inspection carefully. Before notifying the Data Steward that an export is ready, make sure that the DLP inspection does not detect sensitive info, or that if it does, you are aware of the items it flags and can explain why they are false alarms. + +The DLP scan is automatically triggered by any new file in the export folder. It may take several minutes to run. When it is complete, a summary file will be written back to the “dlp” folder in the egress bucket. +```sh +gsutil ls gs://egress_bucket_path/dlp +``` +Within this folder, a folder is created for each exported file, and within that are dated summary reports for each version. +```sh +gsutil ls gs://egress_bucket_path/dlp/data_file.txt/ +``` +You should see a file of the format **dlp_results_YYYY-MM-DD-HHMMSS** corresponding approximately to when you added the file to the export folder. Note that the scan takes about a minute to pick up new files, and may behave oddly if you upload several versions very close together. + +To see the summary file contents, use the command: +```sh +gsutil cat gs://egress_bucket_path/dlp/data_file.txt/dlp_results_YYYY-MM-DD-HHMMSS +``` + +If sensitive information is detected, you will see it listed by type and count: +![Type and count of sensitive info](./static/sensitive_info_type_count.png) + +If no sensitive information is detected, you will see a clean scan report.
Double-check that the “Processed bytes” and “Total estimated bytes” approximately line up with the size of your file; if both values are 0, it is likely that there was an error in the scan. +![No sensitive info](./static/no_sensitive_info.png) + + + + + + + + diff --git a/docs/srde/02_user_guide/04_troubleshooting.mdx b/docs/srde/02_user_guide/04_troubleshooting.mdx new file mode 100644 index 00000000..7fa00683 --- /dev/null +++ b/docs/srde/02_user_guide/04_troubleshooting.mdx @@ -0,0 +1,3 @@ +# Troubleshooting + +Coming soon! diff --git a/docs/srde/02_user_guide/05_best_practices.mdx b/docs/srde/02_user_guide/05_best_practices.mdx new file mode 100644 index 00000000..a727dd49 --- /dev/null +++ b/docs/srde/02_user_guide/05_best_practices.mdx @@ -0,0 +1,22 @@ +# Best Practices + +## Shared Files +Each user on the research workspace has their own home directory, as well as access to the top-level **/shared** partition. + +### Shared data files +It is recommended to keep datasets under the /shared partition, especially if they are large. This is more efficient than each researcher making their own copy from the ingress bucket, and ensures all experiments are consistent with each other. + +### Shared code files +Code files should also be stored under the /shared partition whenever possible. You can use a local git repo to keep a version history of your codebase, and to avoid conflicts from multiple developers working on the same file at once. To create a repo, +```sh +cd /shared/code +git init +``` +And then, after adding or modifying files, +```sh +git add . +git commit -m "log message describing your change" +``` +The git repo, with its full version history, can be exported alongside your results for transparency and reproducibility. + + diff --git a/docs/srde/02_user_guide/_category_.json b/docs/srde/02_user_guide/_category_.json new file mode 100644 index 00000000..09c87314 --- /dev/null +++ b/docs/srde/02_user_guide/_category_.json @@ -0,0 +1,3 @@ +{ + "label": "User Guide" +} diff --git a/docs/srde/02_user_guide/static/airflow_permissions.png b/docs/srde/02_user_guide/static/airflow_permissions.png new file mode 100644 index 00000000..0d37839f Binary files /dev/null and b/docs/srde/02_user_guide/static/airflow_permissions.png differ diff --git a/docs/srde/02_user_guide/static/auth_login.png b/docs/srde/02_user_guide/static/auth_login.png new file mode 100644 index 00000000..17ae1e94 Binary files /dev/null and b/docs/srde/02_user_guide/static/auth_login.png differ diff --git a/docs/srde/02_user_guide/static/bastion_instance.png b/docs/srde/02_user_guide/static/bastion_instance.png new file mode 100644 index 00000000..3f47289d Binary files /dev/null and b/docs/srde/02_user_guide/static/bastion_instance.png differ diff --git a/docs/srde/02_user_guide/static/big_query.png b/docs/srde/02_user_guide/static/big_query.png new file mode 100644 index 00000000..8987d29b Binary files /dev/null and b/docs/srde/02_user_guide/static/big_query.png differ diff --git a/docs/srde/02_user_guide/static/big_query_exporter_tab.png b/docs/srde/02_user_guide/static/big_query_exporter_tab.png new file mode 100644 index 00000000..f5a58837 Binary files /dev/null and b/docs/srde/02_user_guide/static/big_query_exporter_tab.png differ diff --git a/docs/srde/02_user_guide/static/big_query_scan_result.png b/docs/srde/02_user_guide/static/big_query_scan_result.png new file mode 100644 index 00000000..46987a2b Binary files /dev/null and b/docs/srde/02_user_guide/static/big_query_scan_result.png differ diff --git
a/docs/srde/02_user_guide/static/cmd_prompt_gcp_login.png b/docs/srde/02_user_guide/static/cmd_prompt_gcp_login.png new file mode 100644 index 00000000..fec075e8 Binary files /dev/null and b/docs/srde/02_user_guide/static/cmd_prompt_gcp_login.png differ diff --git a/docs/srde/02_user_guide/static/cmd_prompt_py.png b/docs/srde/02_user_guide/static/cmd_prompt_py.png new file mode 100644 index 00000000..fdbbcf42 Binary files /dev/null and b/docs/srde/02_user_guide/static/cmd_prompt_py.png differ diff --git a/docs/srde/02_user_guide/static/cmd_prompt_python.png b/docs/srde/02_user_guide/static/cmd_prompt_python.png new file mode 100644 index 00000000..a4817170 Binary files /dev/null and b/docs/srde/02_user_guide/static/cmd_prompt_python.png differ diff --git a/docs/srde/02_user_guide/static/dag_status.png b/docs/srde/02_user_guide/static/dag_status.png new file mode 100644 index 00000000..db13de94 Binary files /dev/null and b/docs/srde/02_user_guide/static/dag_status.png differ diff --git a/docs/srde/02_user_guide/static/dag_trigger.png b/docs/srde/02_user_guide/static/dag_trigger.png new file mode 100644 index 00000000..63393647 Binary files /dev/null and b/docs/srde/02_user_guide/static/dag_trigger.png differ diff --git a/docs/srde/02_user_guide/static/data_egress_process_overview.png b/docs/srde/02_user_guide/static/data_egress_process_overview.png new file mode 100644 index 00000000..cb1ac824 Binary files /dev/null and b/docs/srde/02_user_guide/static/data_egress_process_overview.png differ diff --git a/docs/srde/02_user_guide/static/data_ingestion_process_overview.png b/docs/srde/02_user_guide/static/data_ingestion_process_overview.png new file mode 100644 index 00000000..352671cf Binary files /dev/null and b/docs/srde/02_user_guide/static/data_ingestion_process_overview.png differ diff --git a/docs/srde/02_user_guide/static/find_relevant_dag.png b/docs/srde/02_user_guide/static/find_relevant_dag.png new file mode 100644 index 00000000..ce46f6ea Binary files /dev/null and b/docs/srde/02_user_guide/static/find_relevant_dag.png differ diff --git a/docs/srde/02_user_guide/static/gcloud_windows_installer.png b/docs/srde/02_user_guide/static/gcloud_windows_installer.png new file mode 100644 index 00000000..8ecbf4fe Binary files /dev/null and b/docs/srde/02_user_guide/static/gcloud_windows_installer.png differ diff --git a/docs/srde/02_user_guide/static/gcp_cloud_storage_buckets.png b/docs/srde/02_user_guide/static/gcp_cloud_storage_buckets.png new file mode 100644 index 00000000..39fa626a Binary files /dev/null and b/docs/srde/02_user_guide/static/gcp_cloud_storage_buckets.png differ diff --git a/docs/srde/02_user_guide/static/gcp_copy_to_ingress_bucket.png b/docs/srde/02_user_guide/static/gcp_copy_to_ingress_bucket.png new file mode 100644 index 00000000..6445b707 Binary files /dev/null and b/docs/srde/02_user_guide/static/gcp_copy_to_ingress_bucket.png differ diff --git a/docs/srde/02_user_guide/static/gcp_staging_ingress_bucket.png b/docs/srde/02_user_guide/static/gcp_staging_ingress_bucket.png new file mode 100644 index 00000000..de3b0267 Binary files /dev/null and b/docs/srde/02_user_guide/static/gcp_staging_ingress_bucket.png differ diff --git a/docs/srde/02_user_guide/static/getting_into_bastion.png b/docs/srde/02_user_guide/static/getting_into_bastion.png new file mode 100644 index 00000000..8f714b9f Binary files /dev/null and b/docs/srde/02_user_guide/static/getting_into_bastion.png differ diff --git a/docs/srde/02_user_guide/static/gsutils_cp.png 
b/docs/srde/02_user_guide/static/gsutils_cp.png new file mode 100644 index 00000000..8aa90adc Binary files /dev/null and b/docs/srde/02_user_guide/static/gsutils_cp.png differ diff --git a/docs/srde/02_user_guide/static/gsutils_cp_egress.png b/docs/srde/02_user_guide/static/gsutils_cp_egress.png new file mode 100644 index 00000000..94da3c84 Binary files /dev/null and b/docs/srde/02_user_guide/static/gsutils_cp_egress.png differ diff --git a/docs/srde/02_user_guide/static/gsutils_cp_export.png b/docs/srde/02_user_guide/static/gsutils_cp_export.png new file mode 100644 index 00000000..eac0663c Binary files /dev/null and b/docs/srde/02_user_guide/static/gsutils_cp_export.png differ diff --git a/docs/srde/02_user_guide/static/gsutils_ls.png b/docs/srde/02_user_guide/static/gsutils_ls.png new file mode 100644 index 00000000..5f7ddf32 Binary files /dev/null and b/docs/srde/02_user_guide/static/gsutils_ls.png differ diff --git a/docs/srde/02_user_guide/static/gsutils_ls_on_your_workspace_path.png b/docs/srde/02_user_guide/static/gsutils_ls_on_your_workspace_path.png new file mode 100644 index 00000000..8f4b4097 Binary files /dev/null and b/docs/srde/02_user_guide/static/gsutils_ls_on_your_workspace_path.png differ diff --git a/docs/srde/02_user_guide/static/gsutils_ls_on_your_workspace_path_timestamped_folder.png b/docs/srde/02_user_guide/static/gsutils_ls_on_your_workspace_path_timestamped_folder.png new file mode 100644 index 00000000..1683d8dd Binary files /dev/null and b/docs/srde/02_user_guide/static/gsutils_ls_on_your_workspace_path_timestamped_folder.png differ diff --git a/docs/srde/02_user_guide/static/ls_dot_ssh.png b/docs/srde/02_user_guide/static/ls_dot_ssh.png new file mode 100644 index 00000000..456542c4 Binary files /dev/null and b/docs/srde/02_user_guide/static/ls_dot_ssh.png differ diff --git a/docs/srde/02_user_guide/static/mingw_eval_ssh.png b/docs/srde/02_user_guide/static/mingw_eval_ssh.png new file mode 100644 index 00000000..b19b7614 Binary files /dev/null and b/docs/srde/02_user_guide/static/mingw_eval_ssh.png differ diff --git a/docs/srde/02_user_guide/static/mingw_export_project_gcloud_login.png b/docs/srde/02_user_guide/static/mingw_export_project_gcloud_login.png new file mode 100644 index 00000000..aab0da6b Binary files /dev/null and b/docs/srde/02_user_guide/static/mingw_export_project_gcloud_login.png differ diff --git a/docs/srde/02_user_guide/static/no_sensitive_info.png b/docs/srde/02_user_guide/static/no_sensitive_info.png new file mode 100644 index 00000000..9798881f Binary files /dev/null and b/docs/srde/02_user_guide/static/no_sensitive_info.png differ diff --git a/docs/srde/02_user_guide/static/oauth_consent.png b/docs/srde/02_user_guide/static/oauth_consent.png new file mode 100644 index 00000000..a1cf6b7e Binary files /dev/null and b/docs/srde/02_user_guide/static/oauth_consent.png differ diff --git a/docs/srde/02_user_guide/static/oauth_consent_windows.png b/docs/srde/02_user_guide/static/oauth_consent_windows.png new file mode 100644 index 00000000..e21f207c Binary files /dev/null and b/docs/srde/02_user_guide/static/oauth_consent_windows.png differ diff --git a/docs/srde/02_user_guide/static/pagent_in_tray.png b/docs/srde/02_user_guide/static/pagent_in_tray.png new file mode 100644 index 00000000..7ce3ba5e Binary files /dev/null and b/docs/srde/02_user_guide/static/pagent_in_tray.png differ diff --git a/docs/srde/02_user_guide/static/pagent_properties_popout.png b/docs/srde/02_user_guide/static/pagent_properties_popout.png new file mode 100644 index 
00000000..eb14a90f Binary files /dev/null and b/docs/srde/02_user_guide/static/pagent_properties_popout.png differ diff --git a/docs/srde/02_user_guide/static/pagent_windows.png b/docs/srde/02_user_guide/static/pagent_windows.png new file mode 100644 index 00000000..88f1f030 Binary files /dev/null and b/docs/srde/02_user_guide/static/pagent_windows.png differ diff --git a/docs/srde/02_user_guide/static/powershell_script.png b/docs/srde/02_user_guide/static/powershell_script.png new file mode 100644 index 00000000..de49f40a Binary files /dev/null and b/docs/srde/02_user_guide/static/powershell_script.png differ diff --git a/docs/srde/02_user_guide/static/ppk_add_key.png b/docs/srde/02_user_guide/static/ppk_add_key.png new file mode 100644 index 00000000..d4f9e143 Binary files /dev/null and b/docs/srde/02_user_guide/static/ppk_add_key.png differ diff --git a/docs/srde/02_user_guide/static/ppk_properties_trust.png b/docs/srde/02_user_guide/static/ppk_properties_trust.png new file mode 100644 index 00000000..da4b2b27 Binary files /dev/null and b/docs/srde/02_user_guide/static/ppk_properties_trust.png differ diff --git a/docs/srde/02_user_guide/static/push_using_airflow.png b/docs/srde/02_user_guide/static/push_using_airflow.png new file mode 100644 index 00000000..4c2948a8 Binary files /dev/null and b/docs/srde/02_user_guide/static/push_using_airflow.png differ diff --git a/docs/srde/02_user_guide/static/putty_bastion.png b/docs/srde/02_user_guide/static/putty_bastion.png new file mode 100644 index 00000000..c7c64dba Binary files /dev/null and b/docs/srde/02_user_guide/static/putty_bastion.png differ diff --git a/docs/srde/02_user_guide/static/putty_security_alert.png b/docs/srde/02_user_guide/static/putty_security_alert.png new file mode 100644 index 00000000..124eeef9 Binary files /dev/null and b/docs/srde/02_user_guide/static/putty_security_alert.png differ diff --git a/docs/srde/02_user_guide/static/putty_ssh_to_vm.png b/docs/srde/02_user_guide/static/putty_ssh_to_vm.png new file mode 100644 index 00000000..6ecc3db5 Binary files /dev/null and b/docs/srde/02_user_guide/static/putty_ssh_to_vm.png differ diff --git a/docs/srde/02_user_guide/static/select_project.png b/docs/srde/02_user_guide/static/select_project.png new file mode 100644 index 00000000..3d4a5a10 Binary files /dev/null and b/docs/srde/02_user_guide/static/select_project.png differ diff --git a/docs/srde/02_user_guide/static/sensitive_info_type_count.png b/docs/srde/02_user_guide/static/sensitive_info_type_count.png new file mode 100644 index 00000000..9f7f9595 Binary files /dev/null and b/docs/srde/02_user_guide/static/sensitive_info_type_count.png differ diff --git a/docs/srde/02_user_guide/static/ssh_in_browser.png b/docs/srde/02_user_guide/static/ssh_in_browser.png new file mode 100644 index 00000000..7a9f9990 Binary files /dev/null and b/docs/srde/02_user_guide/static/ssh_in_browser.png differ diff --git a/docs/srde/02_user_guide/static/vpc_basics.png b/docs/srde/02_user_guide/static/vpc_basics.png new file mode 100644 index 00000000..af6fe8bf Binary files /dev/null and b/docs/srde/02_user_guide/static/vpc_basics.png differ diff --git a/docs/srde/03_dlp/_category_.json b/docs/srde/03_dlp/_category_.json new file mode 100644 index 00000000..29cc5484 --- /dev/null +++ b/docs/srde/03_dlp/_category_.json @@ -0,0 +1,3 @@ +{ + "label": "Data Loss Prevention" +} diff --git a/docs/srde/03_dlp/dlp.mdx b/docs/srde/03_dlp/dlp.mdx new file mode 100644 index 00000000..1f2990e6 --- /dev/null +++ b/docs/srde/03_dlp/dlp.mdx @@ -0,0 +1,69
@@ +# DLP Interpretation Guide + +Data Loss Prevention (DLP) is a tool provided by Google Cloud that automatically detects potentially sensitive information such as names, dates, Social Security numbers, credit card numbers, etc. As part of the SRDE egress process, we run a DLP scan to help the data steward more easily judge whether the data being exported is in compliance with the relevant data use agreement(s). The scan will flag any information it considers potentially sensitive, and it is up to the data steward to review the results and either approve the export or send it back to the researchers for further review. + +The DLP scan is run by the Egress 1 DAG as part of the [data egress process](../02_user_guide/03_data_transfers.mdx). Once the report is generated, you can explore the table of flagged items in BigQuery. First, open a query tab as shown in the screenshot below, and then use SQL queries like the examples in this guide to check specific items and view overall statistics on the table. + +![DLP scan in BigQuery](./static/dlp_scan_big_query.png) + + +:::note +DLP is just a tool, and it is not infallible. Use your best judgment, and if you see results that look confusing, get in touch with the PI for clarification. +::: + +## Viewing results from the DLP report + +The most basic query fetches a sample of 100 flagged items from the report. +```sql +SELECT + quote, + info_type.name, + info_type.sensitivity_score.score, + likelihood +FROM `your_table_name` LIMIT 100 +``` + +Each row of the report contains a great deal of metadata on where the potentially sensitive data was found, as well as metadata on the DLP scan itself, but here we select only the following four columns: +- **quote**: the span of text that was flagged as sensitive info +- **info_type.name**: the type of sensitive info +- **info_type.sensitivity_score.score**: the sensitivity level (LOW, MODERATE, or HIGH) +- **likelihood**: the confidence with which the DLP tool has flagged the item (POSSIBLE, LIKELY, or VERY_LIKELY) + +The results should look something like this. As you can see, the same piece of text may be flagged multiple times with different types, depending on the results of DLP’s auto-detection algorithms. +![DLP query results](./static/dlp_query_results.png) + +To see more results, you can adjust the value of the `LIMIT` clause or remove it entirely. Alternatively, use some of the sample queries below to view targeted subsets of the data.
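+ +To view overall statistics rather than individual items, you can aggregate the findings. Here is a sketch (again using the hypothetical `your_table_name` placeholder) that counts flagged items per info type; a type that dominates the counts is usually the first thing worth spot-checking with the sample queries below: +```sql +SELECT + info_type.name AS type_name, + COUNT(*) AS flagged_items +FROM `your_table_name` +GROUP BY type_name +ORDER BY flagged_items DESC +```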
+ +## Sample Queries: selecting a subset of flagged items + +Select only high-sensitivity items: +```sql +SELECT quote, info_type.name, info_type.sensitivity_score.score, likelihood +FROM `your_table_name` +WHERE info_type.sensitivity_score.score = "SENSITIVITY_HIGH" +LIMIT 100 +``` + +Select only items that are high-sensitivity and have a likelihood higher than “possible”: +```sql +SELECT quote, info_type.name, info_type.sensitivity_score.score, likelihood +FROM `your_table_name` +WHERE info_type.sensitivity_score.score = "SENSITIVITY_HIGH" +AND likelihood != "POSSIBLE" +LIMIT 100 +``` + +Select all items, sorted by type: +```sql +SELECT quote, info_type.name, info_type.sensitivity_score.score, likelihood +FROM `your_table_name` +ORDER BY info_type.name +``` + +Select all items of type PERSON_NAME, ordered alphabetically: +```sql +SELECT quote, info_type.name, info_type.sensitivity_score.score, likelihood +FROM `your_table_name` +WHERE info_type.name = "PERSON_NAME" +ORDER BY quote +``` diff --git a/docs/srde/03_dlp/static/dlp_query_results.png b/docs/srde/03_dlp/static/dlp_query_results.png new file mode 100644 index 00000000..53d071cf Binary files /dev/null and b/docs/srde/03_dlp/static/dlp_query_results.png differ diff --git a/docs/srde/03_dlp/static/dlp_scan_big_query.png b/docs/srde/03_dlp/static/dlp_scan_big_query.png new file mode 100644 index 00000000..f5a58837 Binary files /dev/null and b/docs/srde/03_dlp/static/dlp_scan_big_query.png differ diff --git a/docs/srde/04_faq/01_basics.mdx b/docs/srde/04_faq/01_basics.mdx new file mode 100644 index 00000000..60bd052e --- /dev/null +++ b/docs/srde/04_faq/01_basics.mdx @@ -0,0 +1,33 @@ +# About SRDE, projects and getting started + +## What is the SRDE? +The NYU Secure Research Data Environment (SRDE) is a centralized secure computing platform designed to support research projects that require storage, sharing, and analysis of high-risk datasets. The team provides researchers with consultations and resources to comply with security requirements of research grants and Data Use Agreements. SRDE resources are intended to meet the security controls outlined in NIST SP 800-171 to safeguard Controlled Unclassified Information (CUI). + + +:::info[Technical description] +Please refer to [our technical description here](https://docs.google.com/document/d/1DB2eHL2Y6kd5fQWAif1QBgHbDBui1bpR_pjF_hcJIzI/edit?tab=t.0). + +::: + +## Who can have an SRDE project? + +Access to SRDE is available to NYU researchers and sponsored members of their research team (e.g. co-investigators, research assistants, external collaborators). + +:::tip[IRB Requirement] +A project must be reviewed and approved (or receive a waiver) by the University's Institutional Review Board (IRB) in order to have an SRDE. + +::: + +## How do I sign up for an SRDE? +Fill out our intake form to provide more information about your project: [Secure Research Data Environment intake form](https://nyu.qualtrics.com/jfe/form/SV_3Vok9ax87Bxxdsy). Once we have received your form, the team will review the information and will contact you to schedule a consultation. + +## How much will the SRDE cost? +The cost depends on the needs of the project, such as the size of the data and the type of machine needed for the analysis. Google Cloud has a calculator that will help estimate the costs: https://cloud.google.com/products/calculator + +## My data is not high risk, are there other options available?
+There are several options available depending on the data risk classification and the needs of the project, including university resources such as research project space and cloud computing. You can check out many of these services on the [HPC Support Site](../../hpc/01_getting_started/01_intro.md). If you are unsure how to proceed, a consultation with the SRDE team will help determine the best path forward. + +## What does the SRDE team need from me for the consultation? +To help get things started, it would be beneficial to submit an [intake form](https://nyu.qualtrics.com/jfe/form/SV_3Vok9ax87Bxxdsy) with any related data governance documentation (files can be attached to the form), including, but not limited to, the data use agreement, OSP/IRB documents, and project information. + + diff --git a/docs/srde/04_faq/02_env_roles.mdx b/docs/srde/04_faq/02_env_roles.mdx new file mode 100644 index 00000000..6517c3d5 --- /dev/null +++ b/docs/srde/04_faq/02_env_roles.mdx @@ -0,0 +1,19 @@ +# Environment and Roles + +## Who is the Data Steward? +The data steward is an individual who is responsible for the ingress and egress of the data. The data steward is usually an IT administrator of the school the PI is associated with; they should be familiar with datasets and data classification, and may even cosign the Data Use Agreement (DUA) associated with the project. + +:::warning[Data Steward role] +If there is no such person, the project PI will need to assign the role to someone who will NOT be analyzing the data in the SRDE, since this role does not have access to the research workspace in the SRDE (to enforce separation of duties). The SRDE team will provide role-based training for the data steward. + +::: + +## Will other project users have access to my files? +Each user will have access to two drives within the SRDE workspace: home and scratch. The home drive is private and the scratch drive is shared. Any files for collaboration with other project team members on the workspace should be placed or copied over to the scratch drive. + + +## What kind of software is available on the SRDE? +Some statistical analysis software packages are preinstalled on the SRDE, such as Stata and MATLAB; other software is added on a case-by-case basis, depending on its security and compatibility with the SRDE. + +## How long will the data stay in the SRDE? +The project lifecycle will be determined between the PI and the SRDE team during the intake interview. diff --git a/docs/srde/04_faq/03_using_srde.md b/docs/srde/04_faq/03_using_srde.md new file mode 100644 index 00000000..2f71e452 --- /dev/null +++ b/docs/srde/04_faq/03_using_srde.md @@ -0,0 +1,13 @@ +# Using the SRDE + +## Will I receive training on how to use the SRDE? +Absolutely! Once your SRDE is set up, the SRDE team will schedule onboarding sessions for the research workspace users and a separate session for the data steward. In addition, all users will have access to the User Guide and the SRDE support team for troubleshooting and guidance. + +## Why is the SRDE in a terminal, and is there a screen layout? +At this time the SRDE is command-line only. We are working on an updated version with a graphical user interface (GUI) for projects that need it. + +## How do I export a file? +All egress is done by the Data Steward. In order to have a file exported, please follow the instructions in the SRDE User Guide to place the file in the export folder in the research workspace, then alert your Data Steward that there is a file ready for export.
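+ +For example, placing a file named data_file.txt in the export folder looks like this (a sketch; `gs://egress_bucket_path` is a placeholder for your project's actual egress bucket, as described in the User Guide): +```sh +gsutil cp data_file.txt gs://egress_bucket_path/export/data_file.txt +```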
+ +## Can I upload my own (non-sensitive) files to the SRDE? +In order to maintain the integrity of the data management flow into and out of the SRDE, all data going into the environment needs to go through the Data Steward. This ensures that the Data Steward is aware of all external data entering or leaving the environment and can confirm compliance with the DUA, and it provides a single path for audit logging. To have your files uploaded to the environment, you can provide them to the Data Steward, who can then upload them to the ingress bucket for your retrieval. diff --git a/docs/srde/04_faq/_category_.json b/docs/srde/04_faq/_category_.json new file mode 100644 index 00000000..3806c875 --- /dev/null +++ b/docs/srde/04_faq/_category_.json @@ -0,0 +1,3 @@ +{ + "label": "Frequently Asked Questions" +} diff --git a/docs/srde/05_support/01_support.md b/docs/srde/05_support/01_support.md new file mode 100644 index 00000000..2643db24 --- /dev/null +++ b/docs/srde/05_support/01_support.md @@ -0,0 +1,3 @@ +# Support + +Please email your questions to: srde-support@nyu.edu diff --git a/docs/srde/05_support/_category_.json b/docs/srde/05_support/_category_.json new file mode 100644 index 00000000..0a9a1eb6 --- /dev/null +++ b/docs/srde/05_support/_category_.json @@ -0,0 +1,3 @@ +{ + "label": "Support" +} diff --git a/docusaurus.config.ts b/docusaurus.config.ts index 4b9e0c9f..c60ec7b3 100644 --- a/docusaurus.config.ts +++ b/docusaurus.config.ts @@ -145,7 +145,7 @@ const config: Config = { prism: { theme: prismThemes.oneLight, darkTheme: prismThemes.palenight, - additionalLanguages: ['bash', 'shell-session', 'lua', 'julia'], + additionalLanguages: ['bash', 'shell-session', 'lua', 'sql', 'julia'], }, } satisfies Preset.ThemeConfig, };