SLAC AI Capabilities

Need help with Cloud or AI services?


Explore Artificial Intelligence (AI) at SLAC

  • Discover available AI tools.
  • Learn best practices for safe and responsible use.
  • Find ways to enhance your daily work, research, and technical projects. 

From beginners to advanced users, you'll find resources tailored to your needs and expertise level.

Tools

Available AI Solutions

SLAC employees are encouraged to explore these AI tools designed to enhance workflows, boost efficiency, and drive innovation across all missions. No prior AI experience required.

Benefits

SLAC employees can use these tools to enhance their work, accelerate progress, and drive innovation across all SLAC missions, from scientific discovery to essential operational support.

Responsible AI FAQs

These FAQs address the responsible use of AI tools and models at Stanford, drawing from Responsible AI at Stanford and Guidance for responsible AI use at SLAC. Always refer to these resources for the most up-to-date information and policies.

Stanford's AI Advisory Committee has published comprehensive guidelines applicable to SLAC. The report advocates for a balanced approach with flexible, adaptive guidelines rather than rigid policies, emphasizing human oversight, ethical use, alignment with institutional values, and robust data quality. At SLAC, this guidance is applied through the acceptable use policy, as well as data‑classification rules and defined lists of approved and not‑approved AI tools.

Understanding Generative AI

Generative AI uses statistical algorithms to create text, images, videos, audio, and 3D models in response to prompts.

It can help users improve efficiency and creativity, but it also poses risks to data security and privacy, and it may produce inaccurate, biased, or copyright- or patent-infringing results.

  • Agentic AI systems take actions using tools (such as reading or editing files, running commands, or calling APIs) to achieve a goal.
  • Generative AI acts as a “consultant” (it provides advice or drafts content), while agentic AI acts like an “employee” (it takes action). A human must always remain responsible for oversight and outcomes.
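The "employee with human oversight" distinction above can be sketched as a small control loop. This is an illustrative toy, not any SLAC-approved product: the model stand-in, tool names, and file path are all made up, and the key point is that a human approval gate sits between every proposed action and its execution.

```python
# Minimal sketch of an agentic loop: the model proposes a tool call,
# a human gate approves (or rejects) it, and only then is it executed.
# `fake_model`, the tool names, and "notes.txt" are illustrative stand-ins.

def fake_model(goal: str, history: list) -> dict:
    """Stand-in for an LLM: picks the next tool call toward a goal."""
    if not history:
        return {"tool": "read_file", "args": {"path": "notes.txt"}}
    return {"tool": "done", "args": {}}  # goal considered achieved

def run_agent(goal: str, tools: dict, approve) -> list:
    """Loop: ask the model for an action, get human approval, execute it."""
    history = []
    while True:
        action = fake_model(goal, history)
        if action["tool"] == "done":
            return history
        if not approve(action):  # human oversight stays in the loop
            history.append((action, "rejected by reviewer"))
            continue
        result = tools[action["tool"]](**action["args"])
        history.append((action, result))

tools = {"read_file": lambda path: f"(contents of {path})"}
log = run_agent("summarize notes", tools, approve=lambda a: True)
```

Replacing `approve=lambda a: True` with a function that actually prompts a person is what separates assisted automation from the fully autonomous behavior prohibited below.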

Safety and Security Measures

  • Always follow SLAC data‑classification rules (“The Golden Rule”).
  • Public data may be used in public AI tools.
  • Internal or proprietary SLAC data must NEVER be entered into public AI tools.
  • Sensitive information may only be used with SLAC‑approved, SLAC‑managed AI platforms.
    Most public AI tools learn from user prompts, meaning information may no longer be private once submitted.
  • Ensure all AI tools are configured to only connect to SLAC-approved AI frameworks.
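The classification rules above amount to a simple gate: before any prompt is submitted, compare the data's sensitivity tier against the highest tier the tool is approved for. The sketch below is purely illustrative; the tier names and per-tool approvals are placeholders, and the real approved scope of each tool is defined by SLAC IT, not by a table like this.

```python
# Illustrative pre-submission check for the data-classification rules above.
# Tiers and tool approvals are made-up placeholders, not SLAC policy data.

TIERS = ["public", "internal", "sensitive"]  # low -> high sensitivity

# Hypothetical registry: the highest tier each tool may handle.
TOOL_MAX_TIER = {
    "public_chatbot": "public",       # public tools: public data only
    "slac_managed_llm": "sensitive",  # SLAC-approved, SLAC-managed platform
}

def may_submit(data_tier: str, tool: str) -> bool:
    """Allow submission only if the tool is approved for data at that tier."""
    if tool not in TOOL_MAX_TIER:
        return False  # unknown or unapproved tool: never submit
    return TIERS.index(data_tier) <= TIERS.index(TOOL_MAX_TIER[tool])
```

Note the default in `may_submit`: an unrecognized tool is treated as unapproved, mirroring the rule that only explicitly approved tools may be used.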

Each AI service has a defined list of approved and unapproved information for that model, depending on how that service is implemented. Some AI services are approved for all categories of SLAC information, up to and including CUI, while others are approved only for low-risk, non-sensitive information. Always check the restrictions on the exact tool you are using.

Remember, some AI tools send information beyond just your prompt to a third-party service. Be very careful when configuring or selecting tools, as a misconfiguration may accidentally send sensitive information to an unapproved AI tool.

Remember that some public AI services and tools are completely prohibited for use at SLAC, including for public information.  Review the Banned Hardware and Software list for details.

Contact the Cyber Security Team for IT security questions, or SLAC IT for privacy questions. If you are considering a third-party generative AI tool, start a discussion with SLAC IT first.

If your work is significantly influenced by an AI platform, be transparent about its use and cite it appropriately. This helps build trust and allows your audience to understand the role of AI in your work.

Validate that generated content is not subject to pre-existing copyright or patents; many AI models have been shown to output material protected by a third party's copyright or patent.

Approved and Not‑Approved AI Tools

Only AI tools explicitly approved by SLAC IT may be used, and only within their approved scope.
Approved tools include:

  • Microsoft Copilot (on SLAC‑managed systems)
  • GitHub Copilot (SLAC‑provided license, SLAC‑managed repositories only)
  • Stanford AI Playground (for general, non‑sensitive information)
  • AWS Bedrock (within SLAC‑managed cloud environments)
  • SLAC AI Accelerator
  • Local‑only AI models running on authorized SLAC infrastructure

Public cloud AI tools are not approved for SLAC information, including ChatGPT, Anthropic/Claude (cloud service), Gemini, DeepSeek, Grammarly, Grok, and similar tools.

Local Models and Advanced Use

Local‑only models are generally permitted when run on SLAC‑authorized systems. Sensitive information—including CUI—may only be used if the underlying system is authorized for that sensitivity level. Network‑aware models must be explicitly reviewed and approved by SLAC Cyber Security before use with non‑public data.

Agentic AI and Automation

Fully autonomous actions—such as submitting pull requests, merging code, or deploying changes—are strictly prohibited. AI may propose changes, but humans must review, test, approve, and manually apply them. You remain legally and professionally responsible for all actions taken under your credentials.

Risk Factors

Key risks include compromising sensitive data (through input prompts used to train models), receiving inaccurate results ("hallucinations"), and generating insecure or infringing code.

  • AI hallucinations are instances where an AI presents false or incorrect information convincingly, without skepticism, fact-checking, or citations.
  • Remember that large language models are built with an affirmation bias: their output is skewed toward providing an answer, even when the tool is instructed not to guess. This bias is the most common, though not the only, reason wildly incorrect answers (hallucinations) are produced so regularly.

Never blindly trust AI output.

  • Verify facts and results
  • Review code for security and correctness
  • Ensure license and policy compliance
  • Human review is always required before production use

Best Practices & Resources

  • Do NOT input sensitive data unless the tool is explicitly authorized to handle it.
  • Be mindful of data privacy, and opt out of data sharing for AI training when possible.
  • Obtain informed consent when using AI to interact with users.
  • Transparently cite AI use in your work.
  • Avoid risky third-party bots and integrations in meetings.
  • Refer to relevant discipline-specific policies and guidelines.

Stanford provides various resources, including the Generative AI Policy Guidance, Minimum Security Standards, Minimum Privacy Standards, and relevant policies such as HIPAA, GDPR, and FERPA, along with training materials and guidance on ethical considerations. Contact the ISO or UPO for specific questions or assistance regarding Stanford resources.

As instructed by CS-200 (or CS-100 for non-employees), immediately contact the Cyber Security Team or the IT Service Desk to report the potential incident.

Do not proceed. Contact SLAC IT or the Cyber Security Team for guidance before using the tool.