Computing Resources

This page summarizes the computing resources provided by Computer Science IT Services.

The Computer Science IT Services group consists of CS systems engineers who work with the CS Computing Committee and OIT to support the technology needs of faculty, staff, and students.

Questions? Please contact us by sending an email to cscihelp@colorado.edu from your university-provided email address.
This will create a case in OIT's ServiceNow system and assign it to our group, providing formal tracking for assistance.

Below are the services that CS IT Services officially recognizes and supports. Please contact us if you would like us to explore adding a new service or application to this list.

If urgent assistance is needed, please contact us directly on the , or via .

Computer Science IT Services SLA

Hours of Coverage, Response Times & Escalation

Dedicated support hours of operation are 9:00 AM to 6:00 PM, Monday through Friday, for requests and incidents.
Weekend support is limited to incidents only, with a response time of 4 hours.

These hours of operation for the "Requests" category exclude university holidays and official closures.
You can contact CS IT Services via OIT's ServiceNow system by emailing cscihelp@colorado.edu.

Service | Requests | Incidents | After-hours / Weekend Incidents
Managed Linux Instances | 2 business days | 2 hours | 4 hours
JupyterHub (CSEL Coding) | 2 business days | 2 hours | 4 hours
OpenStack (CS Cloud Platform) | 2 business days | 2 hours | 4 hours
Cloud Object Storage (CS Red Hat Ceph S3) | 2 business days | 2 hours | 4 hours
VDI Course Environment (CS vSphere) | 4 business days | 2 hours | 4 hours
Moodle LTI Activities to Canvas | 4 business days | 2 hours | 4 hours
CS Legacy Faculty Home | 4 hours | 2 hours | 4 hours
CS Legacy Student CSEL Sites | 2 business days | 8 hours | n/a
URL Redirector Service | 2 business days | 2 hours | 4 hours


Computer Science Service Alerts

Check the current status of CS services.


Managed Instances are virtual servers that operate out of the east campus datacenter.


"Managed" in this context means that your instance receives a standardized base configuration applied by our configuration management platform. This includes authentication and authorization (via OIT IAM), networking, automated patching, backups, routine authenticated vulnerability scanning, and more.

When a long-term server is required for hosting or similar needs, this is the recommended option. Completing the following form will create a GREQ with us, which you can track in ServiceNow.

Request a Managed Cloud Instance

If an unmanaged instance is desired instead, for short-term use or experimentation, see the OpenStack Cloud Computing Platform service below to spin up unmanaged servers.

OpenStack is an open-source cloud computing infrastructure solution for in-house clouds. It functions similarly to cloud platforms such as AWS, but comes at no cost (other than electricity).


The cloud platform provides the self-service ability to spin up a variety of operating systems and attach them to the ¶¶ÒõÂÃÐÐÉä Boulder Science Network. All active ¶¶ÒõÂÃÐÐÉä Boulder employees are automatically joined to a Demo project and can spin up Generation 2 instances.

This computing resource is provided free of charge and is hosted with university resources. It is intended to be used only for university-related business. Please review the Acceptable Use of ¶¶ÒõÂÃÐÐÉä Boulder's IT Resources page for more information. OIS has full access to this platform.

*Students must be part of a project set up by a faculty, student faculty, or staff member in order to spin up cloud instances.

Request an OpenStack Private Project


OpenStack Capabilities Reference

General Purpose (m3)
This is a balanced instance type that should provide proportional resources for most workloads. These flavors are powered by Intel Xeon® E5-2667v2 Ivy Bridge processors with DDR3 memory.

Flavor | vCPU | Mem (GiB) | Network Performance (Gbps) | Restrictions
m3.nano | 2 | 0.5 | Up to 10 | None
m3.micro | 2 | 1 | Up to 10 | None
m3.small | 2 | 2 | Up to 10 | None
m3.medium | 2 | 4 | Up to 10 | None
m3.large | 2 | 8 | Up to 10 | None
m3.xlarge | 4 | 16 | Up to 10 | None
m3.2xlarge | 8 | 32 | Up to 10 | None

Compute Optimized (c3)
This instance type has a higher processor-to-memory ratio for CPU-intensive workloads. These flavors are powered by Intel Xeon® E5-2667v2 Ivy Bridge processors with DDR3 memory.

Flavor | vCPU | Mem (GiB) | Network Performance (Gbps) | Restrictions
c3.large | 2 | 4 | Up to 10 | None
c3.xlarge | 4 | 8 | Up to 10 | None
c3.2xlarge | 8 | 16 | Up to 10 | None
c3.4xlarge | 16 | 32 | Up to 10 | None

General Purpose (m4)
This is a balanced instance type that should provide proportional resources for most workloads. These flavors are powered by Intel Xeon® E5-2667v3 Haswell processors with DDR4 memory.

Flavor | vCPU | Mem (GiB) | Network Performance (Gbps) | Restrictions
m4.nano | 2 | 0.5 | Up to 25 | CSCI affiliation
m4.micro | 2 | 1 | Up to 25 | CSCI affiliation
m4.small | 2 | 2 | Up to 25 | CSCI affiliation
m4.medium | 2 | 4 | Up to 25 | CSCI affiliation
m4.large | 2 | 8 | Up to 25 | CSCI affiliation
m4.xlarge | 4 | 16 | Up to 25 | CSCI affiliation
m4.2xlarge | 8 | 32 | Up to 25 | CSCI affiliation

Compute Optimized (c4)
This instance type has a higher processor-to-memory ratio for CPU-intensive workloads. These flavors are powered by Intel Xeon® E5-2667v3 Haswell processors with DDR4 memory.

Flavor | vCPU | Mem (GiB) | Network Performance (Gbps) | Restrictions
c4.large | 2 | 4 | Up to 25 | CSCI affiliation
c4.xlarge | 4 | 8 | Up to 25 | CSCI affiliation
c4.2xlarge | 8 | 16 | Up to 25 | CSCI affiliation
c4.4xlarge | 16 | 32 | Up to 25 | CSCI affiliation

General Purpose (m5)
This is a balanced instance type that should provide proportional resources for most workloads. These flavors are powered by Intel Xeon® Gold 6226R Cascade Lake Scalable processors with the Intel AVX-512 instruction set and DDR4 memory.

Flavor | vCPU | Mem (GiB) | Network Performance (Gbps) | Restrictions
m5.nano | 2 | 0.5 | Up to 25 | CSCI affiliation, pre-approval
m5.micro | 2 | 1 | Up to 25 | CSCI affiliation, pre-approval
m5.small | 2 | 2 | Up to 25 | CSCI affiliation, pre-approval
m5.medium | 2 | 4 | Up to 25 | CSCI affiliation, pre-approval
m5.large | 2 | 8 | Up to 25 | CSCI affiliation, pre-approval
m5.xlarge | 4 | 16 | Up to 25 | CSCI affiliation, pre-approval
m5.2xlarge | 8 | 32 | Up to 25 | CSCI affiliation, pre-approval

Compute Optimized (c5)
This instance type has a higher processor-to-memory ratio for CPU-intensive workloads. These flavors are powered by Intel Xeon® Gold 6226R Cascade Lake Scalable processors with the Intel AVX-512 instruction set and DDR4 memory.

Flavor | vCPU | Mem (GiB) | Network Performance (Gbps) | Restrictions
c5.large | 2 | 4 | Up to 25 | CSCI affiliation, pre-approval
c5.xlarge | 4 | 8 | Up to 25 | CSCI affiliation, pre-approval
c5.2xlarge | 8 | 16 | Up to 25 | CSCI affiliation, pre-approval
c5.4xlarge | 16 | 32 | Up to 25 | CSCI affiliation, pre-approval
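
To check which of these flavors your project can actually launch, the flavor list can be queried with the standard OpenStack CLI. The commands below are a minimal sketch and assume you have installed python-openstackclient and loaded your project's credentials (for example, an RC file downloaded from the dashboard).

# List the flavors visible to your project
openstack flavor list
# Show the details of a single flavor, such as vCPU and RAM
openstack flavor show c5.4xlarge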

The storage technology used for OpenStack is Ceph, a distributed cloud object storage solution that is highly scalable and resilient. Ceph is the upstream of Red Hat Ceph Storage.

Volume Type | Description | Comment
ceph-gp2 | General Purpose SSD | Distributed pool of enterprise flash media
ceph-gp1 | General Purpose SSD | Deprecated, no longer an option for new instances
ceph-st1 | Throughput Optimized HDD | Distributed pool of enterprise magnetic media
ceph-ct1 | Cache Tiered Storage | Distributed pool of flash media, backed by a pool of magnetic media
ceph-sc1 | Cold HDD | Distributed pool of slow/older magnetic media
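
As a sketch of how these volume types are selected, the OpenStack CLI lets you pass the type at volume creation time. The size, volume name, and server name below are placeholders.

# Create a 20 GiB volume on the general-purpose SSD pool (size and names are illustrative)
openstack volume create --size 20 --type ceph-gp2 data-vol
# Attach the volume to an existing instance
openstack server add volume my-instance data-vol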

Cloud images contain the cloud-init package, which is responsible for items such as SSH public key injection. This is the only way to log in to a Linux image booted for the first time. The default login account is set by the cloud image maintainers. For example, the Debian cloud team maintains the Debian image below with a default user of "debian", while Red Hat sets it to "cloud-user".

Depending on the image selected, the initial SSH login (RDP for Windows) must be performed with this user. The account can then be replaced or deleted if desired. Make sure the new account works before removing the default one, or you will be locked out.

Cloud Image | Login Account with SSH Public Key Injection
Red Hat Enterprise Linux 7 | cloud-user
Red Hat Enterprise Linux 8 | cloud-user
CentOS 7 | centos
CentOS 8 Stream | centos
Rocky Linux 8 | rocky
Debian 10 | debian
Ubuntu Distributions | ubuntu
OpenSUSE 15 JeOS | opensuse
Fedora Distributions | fedora
FreeBSD 13 | freebsd
Windows Server 2019 Core / Desktop | Administrator
Windows 10 Enterprise | Windows
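
For example, a first login to a newly booted instance might look like the sketch below. The IP address and key path are placeholders; substitute the address of your instance and the private key that matches the public key you injected.

# First login to a Debian 10 instance (default user "debian"; IP is a placeholder)
ssh -i ~/.ssh/id_ed25519 debian@198.51.100.10
# An instance booted from a Red Hat Enterprise Linux image would use "cloud-user" instead
ssh -i ~/.ssh/id_ed25519 cloud-user@198.51.100.10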

Availability zones indicate physical locations on campus. This mainly affects network performance, so it is recommended that machines requiring low-latency communication with each other be placed in nearby AZs.

For example "ucb-east-1a" indicates it is located on east campus, and "1a" indicates a particular cross cabled rack pair in SPSCv2. "1b" would indicate another rack pair in the same room, while "2a" would indicate SPSCv1. In this example these zones would all be considered "close-enough" to each-other for networking or storage purposes.

Having an instance in "ucb-main-1a" with storage in "ucb-east-1b", for example, may negatively affect instance performance.

Availability Zone | Notes
ucb-east-1a | A rack pair (LACP cross cable) in SPSCv2
ucb-east-1b | A rack pair (LACP cross cable) in SPSCv2
ucb-east-2a | A rack in SPSCv1
ucb-main-1a | Planned but not yet implemented
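
As a sketch of keeping low-latency machines together, both instances below are launched into the same availability zone. The flavor, image, and availability zone come from the tables above; the key pair and server names are placeholders.

# Launch two instances into the same rack pair so they stay "close" on the network
openstack server create --flavor m3.small --image "Debian 10" \
  --key-name my-keypair --availability-zone ucb-east-1a app-server-1
openstack server create --flavor m3.small --image "Debian 10" \
  --key-name my-keypair --availability-zone ucb-east-1a app-server-2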

When configuring networks for private projects, it is important to get a bird's-eye view of private (OpenStack-restricted) networking and of how instances attach to "real" (campus or public) networks.

Network Type | Description
External VLAN | An external (campus/public) network. A project-level SDN router must be attached to it, and Floating IPs are allocated from the external network.
Provider External VLAN | An external (campus/public) network that instances attach to directly. Not typical. Floating IPs cannot be used in this case.
Project VXLAN | A private, project-based network defined only in OpenStack. When creating these, it is recommended to use Campus DNS.


Typically, only external networks are presented to a new project. Two networks, scinet-internal and scinet-external, will always be available, but a project will usually have a router connected to only one of these provider networks.

If a specific network is desired for a project, it must be present on the UCB science network and defined in the campus networks file in HIPPO. When requesting a network, we can also determine whether machines should attach to an external network directly or go through an OpenStack private network with a Floating IP.
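
The sketch below shows the typical Project VXLAN pattern: a private network with a router uplinked to an external provider network, and a Floating IP mapped to an instance. The network, subnet range, router, and server names are placeholders; scinet-external is the provider network named above, and the Floating IP value is whatever the allocation returns.

# Create a private project network and subnet (names and range are illustrative)
openstack network create project-net
openstack subnet create --network project-net --subnet-range 192.168.10.0/24 project-subnet
# Create a router, uplink it to the external provider network, and attach the subnet
openstack router create project-router
openstack router set --external-gateway scinet-external project-router
openstack router add subnet project-router project-subnet
# Allocate a Floating IP from the external network and map it to an instance
openstack floating ip create scinet-external
openstack server add floating ip my-instance 198.51.100.20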

Department Provided Software Licensing

As of July 1, 2021, you must be enrolled in a course requiring VMware. Your instructor will then enroll you in IT Academy with instructions for downloading VMware products. We can no longer accept individual requests outside this scope.

Once your instructor has created your account, you will receive login details and can request software in the portal below. You must use your IdentiKey-based email (identikey@colorado.edu). This will be a separate password, as this service is not federated with ¶¶ÒõÂÃÐÐÉä Boulder's identity provider at this time.

Piazza is a forum-based LMS with moderated anonymous posting capabilities for discussion and classroom engagement.

A department-level Piazza license is available to CSCI, CSPB, and CYBR courses.
If features of the paid model are desired, please send us a message via cscihelp@colorado.edu.
This will license all courses you teach in Piazza.

To get started, you will need an instructor Piazza account which can be created via Canvas.

  1. Login to Canvas and Navigate to your Course
  2. Click "Settings", and open the "Navigation Tab"
  3. Enable Piazza, which will now be in Course Navigation.
  4. Create your Piazza course in Canvas

Application Services

It is generally recommended that CSCI faculty host their pages on WebExpress.


We understand, however, that some need a self-managed space with PHP as well, which can be done with a per-user web directory on the CSCI Home Server. If a CMS is desired, a database will typically need to accompany it; please contact us for access to that as well.


Faculty can set up a self-service personal website with PHP, located at https://www.cs.colorado.edu/~IdentiKey
Contact us if a custom URL is desired. To begin, SSH into CSCI Home at home.cs.colorado.edu and create a public_html directory in the root of your home directory.

# create a public_html directory in your home directory root
mkdir -p ~/public_html
# Add html or php content
echo "hello world" > ~/public_html/index.html

If you receive an unauthorized or permission denied error when visiting your site, make sure the web server has permission to read this directory and that the SELinux contexts are correct.

# Repair Permissions
chmod o+x ~; chmod o+rx ~/public_html
# Restore SELinux Context
restorecon -Rv ~/public_html
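
If you want to confirm that PHP is being executed rather than served as plain text, a minimal test page like the one below can be dropped into public_html (the filename is illustrative; remove it once you are done testing).

# Create a small PHP test page
cat > ~/public_html/info.php <<'EOF'
<?php echo "PHP is working: " . date("Y-m-d"); ?>
EOF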

CSCI hosts a redirector service at , which is responsible for simple URL redirects with rewrite. The destination can be any internal or external URL.


This is good for legacy URLs that should be redirected elsewhere without dealing with CNAMEs and TLS SANs.

Examples of current sites handled by this service:
->
-> /irt/mfm/

Any colorado.edu address can be redirected anywhere; please send us a message to request a new redirect.

The official LMS at ¶¶ÒõÂÃÐÐÉä Boulder is Canvas. All CSCI courses are recommended to operate on Canvas as of Fall 2020. However, we also recognize the need for certain Moodle activities that do not have a Canvas analog, such as the CodeRunner plugin.


We have worked with OIT ATAP to find a solution that embeds any Moodle activity into Canvas with gradebook and roster sync. For more information please view the following course.

A database host is available for faculty who require databases for teaching or coursework purposes. This host is part of the Education Lab Remote Access set of machines specifically for database exercises.

It is a standard managed instance and runs MySQL 8 on Red Hat Enterprise Linux 7.

All active CSCI-affiliated IdentiKeys can SSH into this host over the ¶¶ÒõÂÃÐÐÉä Boulder VPN, but for security purposes MySQL does not use PAM/SSSD. Database accounts are managed individually by the faculty member.

The host is located at elra-sql.cs.colorado.edu.
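
A typical session might look like the sketch below. The IdentiKey, database account, and database name are placeholders; the MySQL credentials come from the faculty member running the course, not from your IdentiKey password.

# Connect to the ¶¶ÒõÂÃÐÐÉä Boulder VPN first, then SSH with your IdentiKey (placeholder shown)
ssh identikey@elra-sql.cs.colorado.edu
# Log in to MySQL with the database account provided by your faculty member
mysql -u dbuser -p coursedb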

The JupyterHub server for Jupyter notebooks provides a web-based environment within a per-user sandboxed pod. Several environments are available, such as a web-based Visual Studio IDE.

This operates on a clustered container platform from the east campus datacenter. It is open to CEAS users.

Faculty can request a custom container image built for their course. Message us at cscihelp@colorado.edu.

Additional Resources

While optional, Computer Science IT Services recommends that university-owned computers be managed by DDS. This includes professional support for end-user machines: managed patching, software, troubleshooting, printers, backups, and more.

Faculty and staff currently opted into DDS can visit the  to directly create a ServiceNow case with their group. This is the recommended method to contact DDS.

To opt into DDS, please send a message to Kyle Giacomini for a consultation.
Additional information can be found on the page.

Several CSCI services require the Cisco VPN (i.e., it is always required for SSH and RDP).
Please view the following OIT page for information on setup.

Access to most CSCI services is managed by Grouper, the OIT Enterprise Access Management Service. This is the interface used to control access to various services such as SSH, Linux sudoers, RDP, OpenStack projects, and more.

Generally, when a service is requested from CSCI IT Services, we will also provide a set of Grouper groups for that service for you to manage. These can range from source-of-record groups to individual IdentiKeys.

Please view the following OIT page for more information.

If MFA is requested for a CSCI service, enrollment in OIT IAM Duo is required.

Duo can protect services such as SSH, RDP, and Federated SPs. Note that if you request Duo for your service, users not enrolled in Duo will not be able to access your service.


Please visit the OIT Software Catalog for products available to ¶¶ÒõÂÃÐÐÉä Boulder affiliates. This includes additional software such as MathWorks MATLAB and Wolfram Mathematica.