X.Org Developer's Conference 2019

Concordia University Conference Centre

1450 Guy St. Montreal, Quebec, Canada H3H 0A1
Mark Filion (Collabora), Daniel Vetter (Intel), Samuel Iglesias Gonsálvez (Igalia)

The X.Org Developer's Conference 2019 is the event for developers working on all things open graphics (Linux kernel, Mesa, DRM, Wayland, X11, etc.).

XDC 2019 Registration
  • Abhishek Kharbanda
  • Adam Jackson
  • Aiden Hwang
  • Alistair Delva
  • Alvaro Soliverez
  • Alyssa Rosenzweig
  • Arcady Goldmints-Orlov
  • Arkadiusz Hiler
  • Bas Nieuwenhuizen
  • Ben Crocker
  • Ben Skeggs
  • Ben Widawsky
  • Benjamin Tissoires
  • Bill Kristiansen
  • Boris Brezillon
  • Brian Ho
  • Chad Versace
  • Chema Casanova
  • Christoph Haag
  • Connor Abbott
  • Daniel Schürmann
  • Daniel Stone
  • Daniel Vetter
  • Daniele Castagna
  • Daniele Ceraolo Spurio
  • David Ludovino
  • Drew Davenport
  • Drew DeVault
  • Dylan Baker
  • Eduardo Lima
  • Elie Tournier
  • Eric Anholt
  • Erico Nunes
  • Erik Faye-Lund
  • Fritz Koenig
  • Gil Dekel
  • Guillaume Champagne
  • Gwan-gyeong Mun
  • Harry Wentland
  • Heinrich Fink
  • Iago Toral
  • Ian Romanick
  • Ilja Friedel
  • Jagan Teki
  • Jake Edge
  • Jakob Bornecrantz
  • James Jones
  • Jason Ekstrand
  • Jean-François Dagenais
  • Jens Owen
  • Jesse Natalie
  • Joey Ferwerda
  • Joonas Lahtinen
  • Juan A. Suárez
  • Julian Bouzas
  • Jérôme Glisse
  • Jürgen Schneider
  • Karen Ghavam
  • Karol Herbst
  • Keith Packard
  • Kenneth Graunke
  • Kevin Brace
  • Kristian Hoegsberg Kristensen
  • Laurent Pinchart
  • Liviu Dudau
  • Louis-Francis Ratté-Boulianne
  • Lukasz Janyst
  • Lyude Paul
  • Mark Filion
  • Markus Ongyerth
  • Martin Peres
  • Miguel Casas-Sanchez
  • Mithun Veluri
  • Naseer Ahmed
  • Nicholas Kazlauskas
  • Nicolas Dufresne
  • Paul Kocialkowski
  • Peter Hutterer
  • Philipp Zabel
  • Preston Carpenter
  • Ricardo Grim Cabrita
  • Rob Clark
  • Rob Herring
  • Robert Foss
  • Robert Tarasov
  • Roman Gilg
  • Rosen Zhelev
  • Ryan Houdek
  • Sagar Ghuge
  • Sam Ravnborg
  • Sasha McIntosh
  • Scott Anderson
  • Sean Paul
  • Shayenne Moura
  • Simon Ser
  • Simon Zeni
  • Tomeu Vizoso
  • Vasily Khoruzhick
  • Zach Reizner
  • Zhan Liu
  • Zhenhai Chai
  • Łukasz Spintzyk
    • 08:30–18:30
      Main Track
      • 08:30
        Opening Session 20m
      • 09:00
        Zink: OpenGL on Vulkan 45m

        Zink is a work-in-progress Mesa Gallium driver that implements OpenGL on top of Vulkan. This talk will discuss why and how, and give an update on what's happened in Zink recently.

      • 09:55
        Introducing the Vulkan WSI Layer 20m

        3D graphics is evolving rapidly, with new hardware designs and APIs such as Vulkan. At the same time, windowing systems evolve with new protocols, use cases, formats, synchronization mechanisms, and so on. To support all of this effectively, GPU drivers separate the implementation of Windowing System Integration (WSI) from the core 3D rendering.

        In Vulkan, WSI is implemented through windowing-specific surface extensions and a swapchain implementation. However, through the Vulkan layer mechanism, we can naturally decouple the WSI code from the core GPU driver. This can make development simpler by allowing people to focus on either supporting new windowing systems and features or new GPU hardware drivers.

        Using the Vulkan specification with extensions as the interface between WSI code and rendering drivers enables more cross-vendor sharing. It also encourages more standardization in how drivers integrate with the OS and leads to more feature-rich Linux graphics stacks.

        Introducing the Vulkan WSI layer: the starting point for a driver-agnostic Vulkan swapchain implementation. We've open sourced a working layer that implements VK_EXT_headless_surface and VK_KHR_swapchain, as a starting point to develop a generic implementation for the different Linux windowing systems.

        We'll present the project and its current status, and open for discussion on how we can best collaborate on this important piece of the wider Linux graphics puzzle.


      • 10:15
        Coffee Break 30m
      • 10:45
        ACO, a new compiler backend for GCN GPUs 45m

        RADV (the Radeon Vulkan driver) has for a long time used LLVM as the
        shader compiler backend. However, LLVM has a number of issues which
        led us to develop an alternative shader compiler that leans strongly
        on the shared NIR intermediate representation. This new compiler is showing
        significant gains in both compile time and runtime performance.

        We will talk about our pain points with LLVM and how ACO solves them,
        the overall design of ACO as well as the challenges we see and the
        plans we have for the future.

      • 11:40
        How to not write a back-end compiler 45m

        Compilers are hard and there are always a lot of design decisions involved in trying to come up with the right architecture to target any given piece of hardware. In this talk, Jason will go over some of the design decisions (especially the mistakes) that have been made by the Intel team as well as other established back-ends and how they have worked out in practice. These will be used as motivating examples to discuss current back-end compiler best practices and suggestions for developers working on new back-ends.

      • 12:25
        Lunch 1h 35m
      • 14:00
        Mediump support in Mesa 45m

        GPUs often provide half-float 16-bit registers for floating point calculations. Using these instead of full-precision 32-bit registers can provide a significant performance benefit, particularly on embedded GPUs. OpenGL ES exposes these registers to applications by letting variables be marked as mediump, meaning that the driver is allowed to use a lower precision for any operation involving those variables. The GLES spec makes the lower precision optional, so it is always valid to use a higher precision. Mesa currently implements the spec by effectively ignoring the precision markers and always using full precision.

        This talk will present ongoing work at Igalia to implement a lowering pass to convert mediump operations to 16-bit float operations. The work is targeting the Freedreno driver but the resulting lowering pass may be applicable to other drivers too.
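To illustrate what such a lowering pass trades away, here is a small sketch (illustrative only, not driver code) that round-trips values through IEEE half precision using Python's struct module, mimicking the precision a mediump-lowered operation is allowed to use:

```python
import struct

def to_fp16(x: float) -> float:
    """Round-trip a Python float through IEEE half precision (fp16),
    the precision a mediump lowering pass may use for an operation."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

# fp16 has a 10-bit mantissa, so values keep roughly 3 decimal
# digits of precision, and integers above 2048 are no longer
# exactly representable.
print(to_fp16(0.1))      # close to 0.1, but not equal
print(to_fp16(2049.0))   # rounds to 2048.0
```

This is exactly why the GLES spec only permits, rather than requires, the lower precision: whether fp16 is acceptable depends on what the shader computes with the value.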

      • 14:55
        Implementing Optimizations In NIR 45m

        As more applications come to Linux and drivers move to NIR, the need to perform both application-specific and device-specific shader optimizations increases. Over the past year, numerous enhancements to existing optimizations and new optimization passes have been implemented. Tools and techniques developed from that experience will be presented. The emphasis will be on finding, diagnosing, and validating various kinds of peephole optimization passes and optimizations for NIR's algebraic optimization pass.
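For a flavor of how algebraic optimizations are expressed, here is a toy rewriter whose rule tuples are modeled on Mesa's nir_opt_algebraic.py; the matcher itself is a simplified sketch for illustration, not Mesa code, and the rules shown are generic identities rather than Mesa's actual rule set:

```python
# Each rule maps a search pattern to a replacement; bare lowercase
# names are wildcards bound on match, the first tuple element is the
# opcode, and other values are literal constants.
RULES = [
    (('fadd', 'a', 0.0), 'a'),   # x + 0 -> x
    (('fmul', 'a', 1.0), 'a'),   # x * 1 -> x
    (('fmul', 'a', 0.0), 0.0),   # x * 0 -> 0
]

def match(pattern, expr, env):
    if isinstance(pattern, tuple):   # (op, operand, operand, ...)
        if not (isinstance(expr, tuple) and expr[0] == pattern[0]
                and len(expr) == len(pattern)):
            return False
        return all(match(p, e, env)
                   for p, e in zip(pattern[1:], expr[1:]))
    if isinstance(pattern, str):     # wildcard: bind, or check rebind
        return env.setdefault(pattern, expr) == expr
    return pattern == expr           # literal constant

def subst(repl, env):
    if isinstance(repl, str):
        return env[repl]
    if isinstance(repl, tuple):
        return tuple(subst(r, env) for r in repl)
    return repl

def rewrite(expr):
    if isinstance(expr, tuple):      # rewrite operands bottom-up first
        expr = (expr[0],) + tuple(rewrite(e) for e in expr[1:])
    for pattern, repl in RULES:
        env = {}
        if match(pattern, expr, env):
            return rewrite(subst(repl, env))
    return expr

# The whole tree collapses to the leaf 'x'.
print(rewrite(('fadd', ('fmul', 'x', 1.0), 0.0)))
```

Real algebraic passes add per-rule conditions (e.g. only valid for exact float math) and generate C matchers from the rule list, but the search-and-replace core is the same idea.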

      • 15:40
        Coffee Break 30m
      • 16:10
        Outreachy Internship Report: Refactoring backlight and spi helpers in drm/tinydrm 20m

        In this talk, I will briefly describe my contributions as an Outreachy Round 15 intern. Broadly, I worked on refactoring code in the tinydrm subsystem; specifically, I refactored the backlight and SPI helper functions.

      • 16:35
        Introducing MESA_query_driver and adriconf 20m

        For EVoC '18, I worked on a project implementing the MESA_query_driver EGL extension, using it in adriconf to retrieve driver details on Wayland systems, and creating Flatpak packaging for adriconf.

        Topics for the talk:

        1. Explain MESA_query_driver
        2. Introduce adriconf
        3. Details about adriconf flatpak
        4. Areas of development and contribution to adriconf

        Rob Clark (former board member) and I are discussing the idea of adding adriconf to the Mesa sub-projects.

      • 17:00
        VKMS Improvements 45m

        Historically, setting the display mode (screen resolution, color depth, and refresh rate) was done by userspace programs, which required direct hardware access and could break when switching between modes. To move mode configuration into the kernel, the dri-devel community developed the Kernel Mode Setting (KMS) code and API. Virtual Kernel Mode Setting (VKMS) is a newly added Linux kernel driver that provides a virtual KMS implementation for headless systems and DRM test coverage. It will be valuable for running X or Wayland on a headless machine, enabling use of the GPU. Once VKMS is mature enough, it will be used to run i-g-t test cases and to automate userspace testing.

        In this talk, Shayenne and Mamta will describe how they started contributing to the Linux kernel and later ended up debugging and adding features to the VKMS driver. They will also explain how VKMS fits into the workflow of a graphics pipeline and share their experiences as newcomers in the community. Shayenne worked on bugs related to vblank issues (VKMS was not synchronized with vblank timestamps) and added i-g-t tests to verify correct behavior, while Mamta contributed features such as alpha blending and support for overlay planes in this virtual driver. She would also like to highlight some further applications of VKMS.

    • 17:55–18:25
      Lightning talks: Demos / Lightning talks

      Lightning talks get scheduled as time permits throughout the assigned time block. Please be ready!

    • 08:30–19:00
      Main Track
      • 08:30
        Opening Session 10m
      • 08:45
        Khronos Update & AMA 20m

        The Khronos Group industry consortium has created and evolved many key graphics open standards such as OpenGL and Vulkan – and strongly supports their use in open source platforms. Come and hear about the latest Khronos initiatives and roadmap updates, and participate in an AMA session with Neil Trevett, Khronos President.

      • 09:10
        KWin now and tomorrow 20m

        This talk is meant to give an overview on where we stand with KDE's KWin as an X11 window manager and a Wayland compositor, who is currently working on it and on what tasks now and in the near future.

        The topics will broadly be:

        • The KWin team
        • Technical topics:
        • Abstracting KWin's internals
        • Updating legacy code / cleaning house
        • Wayland multi device and threaded rendering
        • Learning individually and improving our team coherence
        • KWin as part of the X.Org / freedesktop.org / Wayland community
      • 09:35
        Monado: Open Source Augmented & Virtual Reality 45m

        VR took off for consumers with the release of Oculus consumer hardware. But the hardware lacked open source drivers and Linux support in general, and the OpenHMD project was created to solve this issue. The consumer VR space has since grown from a Kickstarter campaign into a large industry. This growth has its downsides: multiple companies compete with their own APIs. Luckily, these companies have agreed to work on a single API under the Khronos umbrella. Now that the provisional OpenXR spec has been released, the Monado project, a follow-up to OpenHMD, has been launched.

        In this talk, Jakob will cover Monado and Khronos' OpenXR standard, give an overview about the current state of open source VR and what lies ahead.

        Jakob works on graphics and virtual reality at Collabora, where he is XR Lead, and is a member of the OpenXR working group. He has worked with Linux graphics since 2006, starting at Tungsten Graphics and moving on to VMware. In 2013 he and a friend started the OpenHMD project, and in the spring of 2019 he was involved in launching both Monado and OpenXR at GDC.

      • 10:20
        Coffee Break 25m
      • 10:45
        Improving frame timing accuracy in Mesa, DRM and X 45m

        Smooth animation of graphics requires that the presentation timing of
        each frame be controlled accurately by the application so that the
        contents can be correctly adjusted for the display time. Controlling
        the presentation timing involves predicting when rendering of the
        frame will be complete and using the display API to request that the
        frame be displayed at a specific time.

        Predicting the time it will take to render a frame usually draws upon
        historical frame rendering times along with application
        heuristics. Once drawn, the display API is given the job of presenting
        the content to the user at the specified time. A failure of either of
        these two mechanisms will result in content being delayed, and a
        stuttering or judder artifact made visible to the user.

        Historical timing information includes both the time taken to render a
        frame with the GPU along with the actual time each frame was displayed
        to the user. Ideally, the application will also be given some estimate
        of how long it will take to ready the frame for display once the
        presentation request has been delivered to the display system. With
        these three pieces of information (application GPU time, actual
        display time, presentation overhead), the application can estimate
        when its next frame will be ready for display.

        The following work is underway to provide applications this
        information and to improve the accuracy of display presentation timing
        in the Linux environment.

        1. Vulkan GOOGLE_display_timing extension implementation in
          Mesa. This offers applications some fairly straightforward
          measurements that can help predict when a frame timing target
          might be missed.

        2. Heuristics in the X Composite and Present extension
          implementations to improve accuracy of reported display times to
          Present-using applications

        3. Additions to Composite that replace the above heuristics with
          precise timing information for Compositing managers modified to
          support these additions.

        4. Semi-automatic compositing support added to the Composite
          extension, which allows in-server compositing of some windows to
          reduce variability in the display process.

        This presentation will describe the above work and demonstrate the
        benefits of the resulting code.
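The prediction step described above reduces to simple arithmetic. This toy sketch (not from Mesa or X; all names are illustrative) picks the earliest vblank a frame can safely target, given recent GPU render times, the last observed display time, and an estimate of presentation overhead:

```python
def predict_target(now_ns, refresh_ns, last_display_ns,
                   gpu_times_ns, overhead_ns):
    """Estimate the earliest display time the next frame can target."""
    # Pessimistic render estimate: worst case among recent frames.
    render_ns = max(gpu_times_ns)
    ready_ns = now_ns + render_ns + overhead_ns
    # Round up to the first vblank after the frame is ready,
    # counting refresh cycles from the last observed display time.
    elapsed = ready_ns - last_display_ns
    cycles = -(-elapsed // refresh_ns)   # ceiling division
    return last_display_ns + cycles * refresh_ns

# 60 Hz display; the frame takes ~9 ms to render plus 1 ms overhead:
target = predict_target(now_ns=100_000_000,
                        refresh_ns=16_666_667,
                        last_display_ns=95_000_000,
                        gpu_times_ns=[8_000_000, 9_000_000],
                        overhead_ns=1_000_000)
print(target)   # -> 111_666_667, the vblank after the frame is ready
```

A real implementation must also cope with misprediction (a frame missing its target) and with the compositor's own deadline, which is what the Composite additions above are meant to expose.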

      • 11:40
        A case study on frame presentation from user space via KMS 45m

        Traditionally, an application had very little control over when a rendered
        frame is actually going to be displayed. For games, this uncertainty can cause
        animation stuttering [0]. A Vulkan prototype extension was added to address
        this problem [1].

        XR (AR/VR) applications similarly need accurate knowledge of presentation
        timestamps in order to predict the head-pose for the time a frame will be
        displayed. Here, inaccuracies lead to registration errors (i.e. mismatch
        between virtual and real head pose), causing users to get motion sickness or to
        experience swimming of virtual content.

        XR compositors also optimize for latency. An already-rendered frame is
        corrected for the most recent head-pose, right before its scan-out to display.
        The time between the correction of a frame and its presentation determines the
        resulting latency. In order to keep this value as low as possible, a compositor
        needs to control how late a frame can be scheduled in order to make the desired
        presentation time.

        The Atomic KMS API is the lowest-level cross-driver API for programming
        display controllers on Linux. With KMS, buffers can be submitted directly from
        user space for display, circumventing traditional presentation layers of
        graphics APIs (e.g. EGL surfaces or Vulkan swapchains). This way, applications
        gain exclusive access to the display engine for maximum control. Collabora and
        DAQRI recently published the kms-quads sample project to demonstrate this
        technique [2]. While working on this, we identified several issues of the KMS
        API that make it challenging to implement tightly scheduled buffer
        presentations as required by the use cases mentioned above. For instance, which
        part of the scan-out signal the timestamps provided by KMS refer to is not well
        defined. Furthermore, it is unclear what the latest point in time is that a
        buffer can be submitted to make a specific presentation deadline (see [3] for
        related discussion). The advent of adaptive-sync support in KMS makes this
        topic even more complex.

        This talk should serve as an introduction and summary to user-driven
        presentation timing via KMS, based on last year's experience of implementing a
        KMS-based AR compositor at DAQRI. We will discuss the use-case, its
        implementation and demonstrate open problems of this topic, hopefully leading
        to further discussion at the venue.

        [0] https://medium.com/@alen.ladavac/the-elusive-frame-timing-168f899aec92
        [1] https://lists.freedesktop.org/archives/dri-devel/2018-February/165319.html
        [2] https://gitlab.freedesktop.org/daniels/kms-quads
        [3] https://github.com/mikesart/gpuvis/issues/30

      • 12:25
        Lunch 1h 35m
      • 14:00
        Nouveau Generic Allocator Implementation in GBM 45m

        I will discuss the results of an effort to implement the concepts discussed in my prior generic Unix device memory allocator talks as extensions to the existing GBM API with a nouveau driver backend. Based on prior feedback, DRM format modifiers will be used in place of what were previously called "capabilities". I will attempt to demonstrate the feasibility of this less-invasive approach, and demonstrate the performance/efficiency compared to existing GBM usage models.

      • 14:55
        Simple DRM display drivers 20m


        A large number of different display controllers are implemented in SoC designs, and new variants continue to appear several times per year. Linux support is in high demand, and with fbdev devices deprecated and no longer accepted in the kernel, new display controllers need a DRM display driver to be supported on Linux.
        The DRM subsystem has over time introduced a lot of helpers, so a display driver for a simple display controller can be created with a limited number of lines of code and without diving too deep into the details of the DRM subsystem.

        This talk explains how the helpers in the DRM subsystem allowed the creation of a display driver for the Atmel AT91SAM CPU. The different helpers will be explained with sufficient examples to allow using this talk as a reference when implementing a new display driver.
        It will be demonstrated how introducing the DRM helpers in existing drivers
        can greatly reduce line count and complexity.
        The talk will, as part of the above topics, introduce concepts that
        were previously unknown to a newbie in DRM land.
        The target audience is developers planning to work with CPUs that have a simple
        display controller with only a single plane and no GPU.
        Everyone else interested in display controllers and small display
        drivers will also find the talk relevant.


        The talk will introduce the framebuffer – and what a fourcc code is
        for the framebuffer.
        The relation to an fbdev driver that works with a depth value, and how this
        maps to fourcc codes, will be explained.
        The differences between addfb() and addfb2() will be covered.
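As a point of reference, a fourcc code is just four ASCII bytes packed little-endian into a 32-bit value; this small sketch mirrors the kernel's fourcc_code() macro from include/uapi/drm/drm_fourcc.h:

```python
def fourcc_code(a, b, c, d):
    """Pack four ASCII characters into a little-endian u32,
    as the DRM fourcc_code() macro does."""
    return ord(a) | ord(b) << 8 | ord(c) << 16 | ord(d) << 24

# 'XR24' is DRM_FORMAT_XRGB8888: 32 bits per pixel, 8 bits per
# channel, the X byte unused -- the closest match to a legacy
# fbdev configuration of depth 24 at bpp 32.
DRM_FORMAT_XRGB8888 = fourcc_code('X', 'R', '2', '4')
print(hex(DRM_FORMAT_XRGB8888))   # -> 0x34325258
```

This is part of why addfb2() exists: the legacy addfb() takes depth/bpp pairs, while addfb2() takes a fourcc format (plus per-plane pitches and offsets), which is unambiguous.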

        The pipeline from framebuffer, crtc, encoder, (bridge),
        connector to the panel will be covered.
        This includes a good intro to the simple_pipe helpers and examples
        of their practical use.
        The generic fbdev emulation will be explained, and it will be
        demonstrated how simple it is to use.

        The bochs driver will be used to demonstrate how introducing the
        common helpers can reduce line count dramatically – and thus
        give the listener an idea of the help provided and how easy
        it is these days to write a DRM driver.
        The helpers for handling buffer objects in a simple display driver will be covered:
        - The helpers for simple coherent DMA-able memory (CMA)
        - The helpers for video RAM (VRAM)
        - The helpers for handling shmem may be covered – to be decided

        As time permits the talk may also present some of the possibilities in the panel sub-system and how to add new panels.

        Again as time permits the talk will introduce tinydrm drivers.

      • 15:15
        Coffee Break 30m
      • 15:45
        Lima driver status update 45m

        Lima is an open source graphics driver which supports the Mali 400/450 embedded GPUs from ARM, developed via reverse engineering.
        Recently, many years after the reverse engineering efforts on these devices began, the lima driver was finally upstreamed in both its Mesa and Linux kernel counterparts.
        This talk will cover some information about the target GPUs and implementation details of the driver.
        It will also include a history of the work done so far and the recent efforts which led to its inclusion upstream.
        Lima is a project under development, and many features are still missing for it to become a complete graphics driver.
        The aim of this talk is to discuss its current state and how it is expected to improve going forward.

      • 16:40
        Freesync, Adaptive Sync & VRR 45m

        DP adaptive sync, a feature supported by AMD under the marketing name of Freesync, primarily allows for smoother gameplay but also enables other use cases, like idle desktop powersaving and 24Hz video playback. In this talk we'll describe what adaptive sync is, how it works, and will speak to different use cases and how they might be implemented. The presentation will cover design and code snippets to show how the feature is enabled.

      • 17:35
        Enabling 8K displays - A story of 33M pixels, 2 CRTCs and no Tears! 20m

        Ever seen a true 33 million pixel 8K display? The maximum display link bandwidth available with DisplayPort’s highest bit rate of 8.1 Gbps/lane limits the resolution to 5K@60 over a single DP connector. Hence the only true 8K displays allowing up to a full 60 frames per second are the tiled displays enabled using 2 DP connectors running at their highest bit rate across 2 CRTCs in the display graphics pipeline. Enabling tiled displays across a dual CRTC, dual connector configuration has always resulted in screen tearing artifacts due to synchronization issues between the two tiles and their vertical blanking interrupts.

        Transcoder port synchronization is a new feature supported in Intel’s Linux graphics kernel driver for platforms starting with Gen 11 that fixes the tearing issue on tiled displays. In this talk Manasi will explain how port synchronization is plumbed into the existing atomic KMS implementation. She will deep dive into the DRM and i915 code changes required to handle tiled atomic modesets through master and slave CRTCs in lockstep mode operation to enable tear-free 8K display output across 2 CRTCs and 2 ports in the graphics pipeline. She will conclude by showing the 8K display results using the Intel GPU Tools test suite.

      • 18:00
        X.Org Foundation Board of Directors Meeting 1h
    • 08:45–17:45
      Main Track
      • 08:45
        Opening Session 10m
      • 09:00
        A whirlwind tour through the input stack development efforts 45m

        The input stack comprises many pieces: libinput, libevdev, libratbag, libwacom, and even a few components that don't start with "lib". The kernel and X, for example, are also of some importance.

        This talk is a tour of recently added features and features currently in development. Examples include libinput user devices, the difficulty of supporting high-resolution wheel scrolling in Wayland, how we've painted ourselves into a corner by using the hwdb in libinput and then tore out the whole room and replaced it with a nicer one, and wacky devices like the totem that will probably never work as they do in the advertising videos. This talk includes blue-sky features full of optimism and may include some features that have no such optimism left and are now merely a pile of discarded branches, soaked with tears.

      • 09:55
        Edging closer to the hardware for kernel CI on input devices 20m

        Avoiding regressions in the input stack is hard. Ideally, every commit and the ones before it are tested against every possible device. But the universe hasn't seen fit to provide us with an army of people to test devices, infinite resources, or even a lot of time to spare. Pity, that, really. But we do have a computer, so that's a start.

        In this talk we show how we moved from basically no regression tests in the input stack 10 years ago to a state where every commit gets tested against a variety of devices. We show how we can do CI on the kernel side, and how we can do CI on the user space side.

      • 10:15
        Coffee Break 30m
      • 10:45
        Witchcraft Secrets from a Reverse Engineer 45m

        Thousands of moons ago, an obscure magical art developed by the Magic Resistance beneath the disquieting tides of the Magilectric Revolution: reverse-engineering... Some refuse to acknowledge its existence. Some mistakenly believe it a form of witchcraft or dark magic, entranced by its binary spells and shadowy hexes. The outer world rarely catches more than a glimpse of these powerful mages, for each youngling learns from an elder reverse-engineer, under a non-existent disclosure agreement, knowledge memcpy'd straight to their brains. A write-only memory spell ensured total secrecy of the art form... until today. Learn the exploits of a young mage and her memory restoration spell. Registers and secrets spilled in this magilectric adventure.

      • 11:40
        Everything Wrong With FPGAs 45m

        FPGAs, and their less generic cousins, specialized accelerators, have come onto the scene in the way that GPUs did 20 or so years ago. Indeed, new interfaces have cropped up to support them in a fashion resembling early GPUs, and some vendors have even opted to try to use the DRM/GEM APIs for their devices.

        This talk will start with a background on what FPGAs are, how they work, and where they're currently used. Because the audience is primarily people with a graphics/display background, I will make sure to cover this well. It will then discuss the existing software ecosystem around the various usage models, with some comparison of the existing tools. If time permits, I will provide a demo comparing open tools vs. closed ones.

        The goal of the talk is to find people who have lived through DRI1 days and are able to help contribute their experience (and coding) toward improving the stack for the future accelerators.

        Currently, my focus is on helping to improve a fully open source toolchain, and so I will spend more time on that than on API utilization.

      • 12:25
        Lunch 1h 35m
      • 14:00
        DRM/KMS for Android 20m

        Update on DRM/KMS driver validation for the Android Open Source Project (AOSP).

        • Status update on adding IGT to AOSP, Android VTS.
        • Pixel DRM/KMS status update.
        • Generic Kernel Image (GKI).
      • 14:25
        Display hardware testing with Chamelium 20m


        Hardware testing can help catch regressions in driver code. One way to test is
        to perform manual checks; however, this is error-prone and doesn't scale. Another
        approach is to build an automated test suite (e.g. IGT), which calls special
        driver interfaces or mocks resources to check whether they are doing the right
        thing (for instance, using checksums to check that the right frame is produced).

        However, it's not possible to test all features with driver helpers: link
        training, hot-plug detection, DisplayPort Multi-Stream Transport and Display
        Stream Compression are examples of hard-to-test features. Moreover, some
        regressions can be missed because they happen at a lower level than the driver.
        For this reason, a board emulating a real screen has been developed: Chamelium.
        It can be used to test display features from the KMS client to the screen and
        make sure the whole software and hardware stack works properly. An overview of
        the Chamelium board features (and limitations) will be presented.


        • Why:
          • Automated testing is essential to merging patches with confidence
          • It's not possible to test all display-related features without real hardware
          • Some features can only be tested by adding knobs in the kernel (e.g. by
            forcing an EDID on a disconnected connector)
          • Such tests don't check that the feature works correctly with a real screen
        • How:
          • Google has developed a Chamelium board that emulates a screen
          • Chamelium features
          • Chamelium support in IGT
          • Example of a Chamelium test (quick demo?)
          • Current limitations and possible improvements
          • Features supported by the receiver chips but not exposed by the Chamelium
          • Features not supported (would require a new board)
      • 14:50
        State of the X.org 20m

        Your secretary's yearly report on the state of the X.Org Foundation. Expect updates on the freedesktop.org merger, internship and student programs, XDC, and more!

      • 15:10
        Coffee Break 35m
      • 17:25
        Closing session 20m
    • 15:45–17:25
      Lightning talks

      Lightning talks get scheduled as time permits throughout the assigned time block. Please be ready!
