SLAC AI Capabilities

Need help with Cloud or AI services?


Explore Artificial Intelligence (AI) at SLAC

  • Discover available AI tools.
  • Learn best practices for safe and responsible use.
  • Find ways to enhance your daily work, research, and technical projects. 

From beginners to advanced users, you'll find resources tailored to your needs and expertise level.

Tools

Available AI Solutions

SLAC employees are encouraged to explore these AI tools designed to enhance workflows, boost efficiency, and drive innovation across all SLAC missions. No prior AI experience is required.

Benefits

With these tools, SLAC employees can enhance their work, accelerate progress, and contribute to greater effectiveness and innovation across all SLAC missions, from scientific discovery to essential operational support.

Responsible AI FAQs

These FAQs address the responsible use of AI tools and models at Stanford, drawing from Responsible AI at Stanford and Guidance for responsible AI use at SLAC. Always refer to these resources for the most up-to-date information and policies.

Generative AI uses algorithms to create text, images, videos, audio, and 3D models in response to prompts. While offering many benefits, it also presents risks.

Generative AI can significantly improve efficiency and creativity but also poses risks to data security and privacy, and may produce inaccurate or biased results.

Be aware that data inputted into third-party AI systems is transmitted and stored on external servers outside Stanford's direct control. This introduces risks of data compromise or loss. Avoid inputting sensitive data (Moderate or High-Risk Data as defined by Stanford), disable saved history options where possible, and carefully consider what you input.

Sensitive data includes, but is not limited to: home addresses, passport numbers, personal health information, passwords, financial data, intellectual property, CUI, ITAR data, proprietary source code, and information protected under NDAs.

Contact the Information Security Office (ISO) for IT security questions or the University Privacy Office (UPO) for privacy questions. If considering a third-party generative AI tool, start a discussion with ISO.

If your work is significantly influenced by an AI platform, be transparent about its use and cite it appropriately. This helps build trust and allows your audience to understand the role of AI in your work.

Key risks include compromising sensitive data (through input prompts used to train models), receiving inaccurate results ("hallucinations"), and generating insecure or infringing code.

AI hallucinations are instances where an AI provides false or incorrect information in a convincing manner, without providing skepticism, fact-checking, or citations.

Best practices for responsible use include:

  • Avoid inputting sensitive data and be mindful of data privacy.
  • Opt out of data sharing for AI training when possible.
  • Obtain informed consent when using AI to interact with users.
  • Transparently cite AI use in your work.
  • Avoid risky third-party bots and integrations in meetings.
  • Refer to relevant discipline-specific policies and guidelines.

Stanford provides various resources, including the Generative AI Policy Guidance, Minimum Security Standards, Minimum Privacy Standards, and relevant policies like HIPAA, GDPR, and FERPA. Additional resources are linked on the original page, including training materials and ethical considerations. Contact ISO or UPO for specific questions or assistance.

Report the incident immediately, following SLAC's established incident reporting procedures.

General AI FAQs

AI tools offer a wide range of possibilities depending on the specific models available. You can use them for creative tasks like writing stories, poems, or code; for practical tasks such as translating languages, summarizing text, or analyzing data; and even for exploring more abstract concepts and generating novel ideas. The possibilities are constantly expanding as new AI models are developed.

AI models can perform many tasks, including but not limited to: text generation (stories, articles, code, scripts), language translation, text summarization, question answering, image generation from text descriptions, image classification and analysis, and data analysis to find patterns and insights. The specific capabilities depend on the model and the platform.

Yes, AI can be a valuable tool for many tasks. For work, it can help with writing reports, summarizing emails, translating documents, or analyzing data. For studies, it can help with research, summarizing complex texts, or generating practice questions. For personal projects, it can help with creative writing, generating ideas, or creating visualizations.

AI opens up many creative avenues. You can write songs, poems, or scripts; generate unique images; create interactive stories; design logos; experiment with different writing styles; or even compose music. The possibilities are limited only by your imagination and the capabilities of the AI model.

Absolutely! AI can be a powerful educational tool. You can use it to summarize complex topics, get answers to questions, translate materials into your native language, generate practice quizzes or exercises, or even create personalized learning plans. It can make learning more accessible and engaging.

Yes, AI models have limitations. They are not perfect and can sometimes produce inaccurate, nonsensical, or biased results. They are also limited by their training data and may struggle with tasks outside the scope of that data. It's important to critically evaluate the AI's output and not blindly trust it. They also generally lack common sense and real-world understanding.