\documentclass{article}
\usepackage{fullpage,amsmath,amssymb,proof,url}
\usepackage[T1]{fontenc}
\usepackage{ae,aecompl}
\newcommand{\cd}[1]{\texttt{#1}}
\newcommand{\mt}[1]{\mathsf{#1}}

\newcommand{\rc}{+ \hspace{-.075in} + \;}
\newcommand{\rcut}{\; \texttt{--} \;}
\newcommand{\rcutM}{\; \texttt{---} \;}

\begin{document}

\title{The Ur/Web Manual}
\author{Adam Chlipala}

\maketitle

\tableofcontents


\section{Introduction}

\emph{Ur} is a programming language designed to introduce richer type system features into functional programming in the tradition of ML and Haskell.  Ur is functional, pure, statically typed, and strict.  Ur supports a powerful kind of \emph{metaprogramming} based on \emph{type-level computation with type-level records}.

\emph{Ur/Web} is Ur plus a special standard library and associated rules for parsing and optimization.  Ur/Web supports construction of dynamic web applications backed by SQL databases.  The signature of the standard library is such that well-typed Ur/Web programs ``don't go wrong'' in a very broad sense.  Not only do they not crash during particular page generations, but they also may not:

\begin{itemize}
\item Suffer from any kinds of code-injection attacks
\item Return invalid HTML
\item Contain dead intra-application links
\item Have mismatches between HTML forms and the fields expected by their handlers
\item Include client-side code that makes incorrect assumptions about the ``AJAX''-style services that the remote web server provides
\item Attempt invalid SQL queries
\item Use improper marshaling or unmarshaling in communication with SQL databases or between browsers and web servers
\end{itemize}

This type safety is just the foundation of the Ur/Web methodology.  It is also possible to use metaprogramming to build significant application pieces by analysis of type structure.  For instance, the demo includes an ML-style functor for building an admin interface for an arbitrary SQL table.  The type system guarantees that the admin interface sub-application that comes out will always be free of the above-listed bugs, no matter which well-typed table description is given as input.

The Ur/Web compiler also produces very efficient object code that does not use garbage collection.  These compiled programs will often be even more efficient than what most programmers would bother to write in C.  The compiler also generates JavaScript versions of client-side code, with no need to write those parts of applications in a different language.

\medskip

The official web site for Ur is:
\begin{center}
  \url{http://www.impredicative.com/ur/}
\end{center}


\section{Installation}

If you are lucky, then the following standard command sequence will suffice for installation, in a directory to which you have unpacked the latest distribution tarball.

\begin{verbatim}
./configure
make
sudo make install
\end{verbatim}

Some other packages must be installed for the above to work.  At a minimum, you need a standard UNIX shell, with standard UNIX tools like sed and GCC (or an alternate C compiler) in your execution path; MLton, the whole-program optimizing compiler for Standard ML; and the development files for the OpenSSL C library.  As of this writing, in the ``testing'' version of Debian Linux, this command will install the more uncommon of these dependencies:
\begin{verbatim}
apt-get install mlton libssl-dev
\end{verbatim}

To build programs that access SQL databases, you also need one of these client libraries for supported backends.
\begin{verbatim}
apt-get install libpq-dev libmysqlclient15-dev libsqlite3-dev
\end{verbatim}

It is also possible to access the modules of the Ur/Web compiler interactively, within Standard ML of New Jersey.  To install the prerequisites in Debian testing:
\begin{verbatim}
apt-get install smlnj libsmlnj-smlnj ml-yacc ml-lpt
\end{verbatim}

To begin an interactive session with the Ur compiler modules, run \texttt{make smlnj}, and then, from within an \texttt{sml} session, run \texttt{CM.make "src/urweb.cm";}.  The \texttt{Compiler} module is the main entry point.

To run an SQL-backed application with a backend besides SQLite, you will probably want to install one of these servers.

\begin{verbatim}
apt-get install postgresql-8.4 mysql-server-5.1
\end{verbatim}

To use the Emacs mode, you must have a modern Emacs installed.  We assume that you already know how to do this, if you're in the business of looking for an Emacs mode.  The demo generation facility of the compiler will also call out to Emacs to syntax-highlight code, and that process depends on the \texttt{htmlize} module, which can be installed in Debian testing via:

\begin{verbatim}
apt-get install emacs-goodies-el
\end{verbatim}

If you don't want to install the Emacs mode, run \texttt{./configure} with the argument \texttt{--without-emacs}.

Even with the right packages installed, configuration and building might fail to work.  After you run \texttt{./configure}, you will see the values of some named environment variables printed.  You may need to adjust these values to get proper installation for your system.  To change a value, store your preferred alternative in the corresponding UNIX environment variable, before running \texttt{./configure}.  For instance, here is how to change the list of extra arguments that the Ur/Web compiler will pass to the C compiler and linker on every invocation.  Some older GCC versions need this setting to mask a bug in function inlining.

\begin{verbatim}
CCARGS=-fno-inline ./configure
\end{verbatim}

Since the author is still getting a handle on the GNU Autotools that provide the build system, you may need to do some further work to get started, especially in environments with significant differences from Linux (where most testing is done).  The variables \texttt{PGHEADER}, \texttt{MSHEADER}, and \texttt{SQHEADER} may be used to set the proper C header files to include for the development libraries of PostgreSQL, MySQL, and SQLite, respectively.  To get libpq to link, one OS X user reported setting \texttt{CCARGS="-I/opt/local/include -L/opt/local/lib/postgresql84"}, after creating a symbolic link with \texttt{ln -s /opt/local/include/postgresql84 /opt/local/include/postgresql}.

The Emacs mode can be set to autoload by adding the following to your \texttt{.emacs} file.

\begin{verbatim}
(add-to-list 'load-path "/usr/local/share/emacs/site-lisp/urweb-mode")
(load "urweb-mode-startup")
\end{verbatim}

Change the path in the first line if you chose a different Emacs installation path during configuration.


\section{Command-Line Compiler}

\subsection{\label{cl}Project Files}

The basic inputs to the \texttt{urweb} compiler are project files, which have the extension \texttt{.urp}.  Here is a sample \texttt{.urp} file.

\begin{verbatim}
database dbname=test
sql crud1.sql

crud
crud1
\end{verbatim}

The \texttt{database} line gives the database information string to pass to libpq.  In this case, the string only says to connect to a local database named \texttt{test}.

The \texttt{sql} line asks for an SQL source file to be generated, giving the commands to run to create the tables and sequences that this application expects to find.  After building this \texttt{.urp} file, the following commands could be used to initialize the database, assuming that the current UNIX user exists as a Postgres user with database creation privileges:

\begin{verbatim}
createdb test
psql -f crud1.sql test
\end{verbatim}

A blank line separates the named directives from a list of modules to include in the project.  Any line may contain a shell-script-style comment, where any suffix of a line starting at a hash character \texttt{\#} is ignored.

For each entry \texttt{M} in the module list, the file \texttt{M.urs} is included in the project if it exists, and the file \texttt{M.ur} must exist and is always included.
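To make this convention concrete, here is a minimal sketch of such a module pair, using the hypothetical name \texttt{Hello} (not part of the sample project above).  The signature file \texttt{Hello.urs} might contain:
\begin{verbatim}
val main : unit -> transaction page
\end{verbatim}
with a matching implementation in \texttt{Hello.ur}:
\begin{verbatim}
fun main () = return <xml>
  <head><title>Hello</title></head>
  <body>Hello, world!</body>
</xml>
\end{verbatim}
Adding the line \texttt{Hello} to the module list of a \texttt{.urp} file would then include both files in the project.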

Here is the complete list of directive forms.  ``FFI'' stands for ``foreign function interface,'' Ur's facility for interaction between Ur programs and C and JavaScript libraries.
\begin{itemize}
\item \texttt{[allow|deny] [url|mime|requestHeader|responseHeader|env] PATTERN} registers a rule governing which URLs, MIME types, HTTP request headers, HTTP response headers, or environment variable names are allowed to appear explicitly in this application.  The first such rule to match a name determines the verdict.  If \texttt{PATTERN} ends in \texttt{*}, it is interpreted as a prefix rule.  Otherwise, a string must match it exactly.
\item \texttt{alwaysInline PATH} requests that every call to the referenced function be inlined.  Section \ref{structure} explains how functions are assigned path strings.
\item \texttt{benignEffectful Module.ident} registers an FFI function or transaction as having side effects, like \texttt{effectful} below, while additionally promising that those effects remain local to a single page generation.  The optimizer avoids removing, moving, or duplicating calls to such functions.  Every effectful FFI function must be registered with one of these two directives, or the optimizer may make invalid transformations.
\item \texttt{clientOnly Module.ident} registers an FFI function or transaction that may only be run in client browsers.
\item \texttt{clientToServer Module.ident} adds FFI type \texttt{Module.ident} to the list of types that are OK to marshal from clients to servers.  Values like XML trees and SQL queries are hard to marshal without introducing expensive validity checks, so it's easier to ensure that the server never trusts clients to send such values.  The file \texttt{include/urweb.h} shows examples of the C support functions that are required of any type that may be marshalled.  These include \texttt{attrify}, \texttt{urlify}, and \texttt{unurlify} functions.
\item \texttt{coreInline TREESIZE} sets how many nodes the AST of a function definition may have before the optimizer stops trying hard to inline calls to that function.  (This option applies to the compiler's polymorphic \texttt{Core} intermediate language; \texttt{monoInline} below is the analogous option for the monomorphic \texttt{Mono} language.)
\item \texttt{database DBSTRING} sets the string to pass to libpq to open a database connection.
\item \texttt{debug} saves some intermediate C files, which is mostly useful to help in debugging the compiler itself.
\item \texttt{effectful Module.ident} registers an FFI function or transaction as having side effects.  The optimizer avoids removing, moving, or duplicating calls to such functions.  Every effectful FFI function must be registered, or the optimizer may make invalid transformations.  (Note that merely assigning a function a \texttt{transaction}-based type does not mark it as effectful in this way!)
\item \texttt{exe FILENAME} sets the filename to which to write the output executable.  The default for file \texttt{P.urp} is \texttt{P.exe}.  
\item \texttt{ffi FILENAME} reads the file \texttt{FILENAME.urs} to determine the interface to a new FFI module.  The name of the module is calculated from \texttt{FILENAME} in the same way as for normal source files.  See the files \texttt{include/urweb.h} and \texttt{src/c/urweb.c} for examples of C headers and implementations for FFI modules.  In general, every type or value \texttt{Module.ident} becomes \texttt{uw\_Module\_ident} in C.
\item \texttt{include FILENAME} adds \texttt{FILENAME} to the list of files to be \texttt{\#include}d in C sources.  This is most useful for interfacing with new FFI modules.
\item \texttt{jsFunc Module.ident=name} gives the JavaScript name of an FFI value.
\item \texttt{library FILENAME} parses \texttt{FILENAME.urp} and merges its contents with the rest of the current file's contents.  If \texttt{FILENAME.urp} doesn't exist, the compiler also tries \texttt{FILENAME/lib.urp}.
\item \texttt{limit class num} sets a resource usage limit for generated applications.  The limit \texttt{class} will be set to the non-negative integer \texttt{num}.  The classes are:
  \begin{itemize}
  \item \texttt{cleanup}: maximum number of cleanup operations (e.g., entries recording the need to deallocate certain temporary objects) that may be active at once per request
  \item \texttt{database}: maximum size of database files (currently only used by SQLite)
  \item \texttt{deltas}: maximum number of messages sendable in a single request handler with \texttt{Basis.send}
  \item \texttt{globals}: maximum number of global variables that FFI libraries may set in a single request context
  \item \texttt{headers}: maximum size (in bytes) of per-request buffer used to hold HTTP headers for generated pages
  \item \texttt{heap}: maximum size (in bytes) of per-request heap for dynamically allocated data
  \item \texttt{inputs}: maximum number of top-level form fields per request
  \item \texttt{messages}: maximum size (in bytes) of per-request buffer used to hold a single outgoing message sent with \texttt{Basis.send}
  \item \texttt{page}: maximum size (in bytes) of per-request buffer used to hold HTML content of generated pages
  \item \texttt{script}: maximum size (in bytes) of per-request buffer used to hold JavaScript content of generated pages
  \item \texttt{subinputs}: maximum number of form fields per request, excluding top-level fields
  \item \texttt{time}: maximum running time of a single page request, in units of approximately 0.1 seconds
  \item \texttt{transactionals}: maximum number of custom transactional actions (e.g., sending an e-mail) that may be run in a single page generation
  \end{itemize}
\item \texttt{link FILENAME} adds \texttt{FILENAME} to the list of files to be passed to the linker at the end of compilation.  This is most useful for importing extra libraries needed by new FFI modules.
\item \texttt{linker CMD} sets \texttt{CMD} as the command line prefix to use for linking C object files.  The command line will be completed with a space-separated list of \texttt{.o} and \texttt{.a} files, \texttt{-L} and \texttt{-l} flags, and finally with a \texttt{-o} flag to set the location where the executable should be written.
\item \texttt{minHeap NUMBYTES} sets the initial size for thread-local heaps used in handling requests.  These heaps grow automatically as needed (up to any maximum set with \texttt{limit}), but each regrow requires restarting the request handling process.
\item \texttt{monoInline TREESIZE} sets how many nodes the AST of a function definition may have before the optimizer stops trying hard to inline calls to that function.  (This option applies to the compiler's monomorphic \texttt{Mono} intermediate language; \texttt{coreInline} above is the analogous option for the polymorphic \texttt{Core} language.)
\item \texttt{noXsrfProtection URIPREFIX} turns off automatic cross-site request forgery protection for the page handler identified by the given URI prefix.  This avoids checking cryptographic signatures on cookies, which is reasonable for certain pages, such as login pages that are going to discard all old cookie values anyway.
\item \texttt{onError Module.var} changes the handling of fatal application errors.  Instead of displaying a default, ugly error 500 page, the error page will be generated by calling function \texttt{Module.var} on a piece of XML representing the error message.  The error handler should have type $\mt{xbody} \to \mt{transaction} \; \mt{page}$.  Note that the error handler \emph{cannot} be in the application's main module, since that would register it as explicitly callable via URLs.
\item \texttt{path NAME=VALUE} creates a mapping from \texttt{NAME} to \texttt{VALUE}.  This mapping may be used at the beginnings of filesystem paths given to various other configuration directives.  A path like \texttt{\$NAME/rest} is expanded to \texttt{VALUE/rest}.  There is an initial mapping from the empty name (for paths like \texttt{\$/list}) to the directory where the Ur/Web standard library is installed.  If you accept the default \texttt{configure} options, this directory is \texttt{/usr/local/lib/urweb/ur}.
\item \texttt{prefix PREFIX} sets the prefix included before every URI within the generated application.  The default is \texttt{/}.
\item \texttt{profile} generates an executable that may be used with gprof.
\item \texttt{rewrite KIND FROM TO} gives a rule for rewriting canonical module paths.  For instance, the canonical path of a page may be \texttt{Mod1.Mod2.mypage}, while you would rather the page were accessed via a URL containing only \texttt{page}.  The directive \texttt{rewrite url Mod1/Mod2/mypage page} would accomplish that.  The possible values of \texttt{KIND} determine which kinds of objects are affected.  The kind \texttt{all} matches any object, and \texttt{url} matches page URLs.  The kinds \texttt{table}, \texttt{sequence}, and \texttt{view} match those sorts of SQL entities, and \texttt{relation} matches any of those three.  \texttt{cookie} matches HTTP cookies, and \texttt{style} matches CSS class names.  If \texttt{FROM} ends in \texttt{/*}, it is interpreted as a prefix matching rule, and rewriting occurs by replacing only the appropriate prefix of a path with \texttt{TO}.  The \texttt{TO} field may be left empty to express the idea of deleting a prefix.  For instance, \texttt{rewrite url Main/*} will strip all \texttt{Main/} prefixes from URLs.  While the actual external names of relations and styles have parts separated by underscores instead of slashes, all rewrite rules must be written in terms of slashes.  An optional suffix of \cd{[-]} for a \cd{rewrite} directive asks to additionally replace all \cd{\_} characters with \cd{-} characters, which can be handy for, e.g., interfacing with an off-the-shelf CSS library that prefers hyphens over underscores.
\item \texttt{safeGet URI} asks to allow the page handler assigned this canonical URI prefix to cause persistent side effects, even if accessed via an HTTP \cd{GET} request.
\item \texttt{script URL} adds \texttt{URL} to the list of extra JavaScript files to be included at the beginning of any page that uses JavaScript.  This is most useful for importing JavaScript versions of functions found in new FFI modules.
\item \texttt{serverOnly Module.ident} registers an FFI function or transaction that may only be run on the server.
\item \texttt{sigfile PATH} sets a path where your application should look for a key to use in cryptographic signing.  This is used to prevent cross-site request forgery attacks for any form handler that both reads a cookie and creates side effects.  If the referenced file doesn't exist, an application will create it and read its saved data on future invocations.  You can also initialize the file manually with any contents at least 16 bytes long; the first 16 bytes will be treated as the key.
\item \texttt{sql FILENAME} sets where to write an SQL file with the commands to create the expected database schema.  The default is not to create such a file.
\item \texttt{timeFormat FMT} accepts a time format string, as processed by the POSIX C function \texttt{strftime()}.  This controls the default rendering of $\mt{time}$ values, via the $\mt{show}$ instance for $\mt{time}$.
\item \texttt{timeout N} sets to \texttt{N} seconds the amount of time that the generated server will wait after the last contact from a client before determining that that client has exited the application.  Clients that remain active will take the timeout setting into account in determining how often to ping the server, so it only makes sense to set a high timeout to cope with browser and network delays and failures.  Higher timeouts can lead to more unnecessary client information taking up memory on the server.  The timeout goes unused by any page that doesn't involve the \texttt{recv} function, since the server only needs to store per-client information for clients that receive asynchronous messages.
\end{itemize}
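As a sketch of how several of these directives fit together, here is a hypothetical \texttt{.urp} file (all module, library, and FFI names are illustrative, not drawn from a real project):
\begin{verbatim}
database dbname=myapp       # connection string passed to libpq
sql myapp.sql               # where to write the schema script
prefix /myapp/
ffi myFfi                   # interface read from myFfi.urs
include myFfi.h
link myFfi.o
effectful MyFfi.writeLog
jsFunc MyFfi.helper=my_helper
rewrite url Main/*          # strip the Main/ prefix from URLs
limit heap 10485760
allow url http://example.com/*

main
\end{verbatim}
As described above, the blank line separates the directives from the module list, and everything from a \texttt{\#} to the end of a line is a comment.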


\subsection{Building an Application}

To compile project \texttt{P.urp}, simply run
\begin{verbatim}
urweb P
\end{verbatim}
The output executable is a standalone web server.  Run it with the command-line argument \texttt{-h} to see which options it takes.  If the project file lists a database, the web server will attempt to connect to that database on startup.  See Section \ref{structure} for an explanation of the URI mapping convention, which determines how each page of your application may be accessed via URLs.
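For example, assuming a project file \texttt{hello.urp} (a hypothetical name), the following builds the application and starts the standalone server.  The \texttt{-p} flag for choosing a port is shown as an illustration; run the executable with \texttt{-h} to confirm which flags your version accepts.
\begin{verbatim}
urweb hello
./hello.exe -p 8080
\end{verbatim}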

To time how long the different compiler phases run, without generating an executable, run
\begin{verbatim}
urweb -timing P
\end{verbatim}

To stop the compilation process after type-checking, run
\begin{verbatim}
urweb -tc P
\end{verbatim}
It is often worthwhile to run \cd{urweb} in this mode, because later phases of compilation can take significantly longer than type-checking alone, and the type checker catches many errors that would traditionally be found through debugging a running application.

A related option is \cd{-dumpTypes}, which, as long as parsing succeeds, outputs to stdout a summary of the kinds of all identifiers declared with \cd{con} and the types of all identifiers declared with \cd{val} or \cd{val rec}.  This information is dumped even if there are errors during type inference.  Compiler error messages go to stderr, not stdout, so it is easy to distinguish the two kinds of output programmatically.  A refined version of this option is \cd{-dumpTypesOnError}, which only has an effect when there are compilation errors.

It may be useful to combine another option \cd{-unifyMore} with \cd{-dumpTypes}.  Ur/Web type inference proceeds in a series of stages, where the first is standard Hindley-Milner type inference as in ML, and the later phases add more complex aspects.  By default, an error detected in one phase cuts off the execution of later phases.  However, the later phases might still determine more values of unification variables.  These value choices might be ``misguided,'' since earlier phases have not come up with reasonable types at a coarser detail level; but the unification decisions may still be useful for debugging and program understanding.  So, if a run with \cd{-dumpTypes} leaves unification variables undetermined in positions where you would like to see best-effort guesses instead, consider \cd{-unifyMore}.  Note that \cd{-unifyMore} has no effect when type inference succeeds fully, but it may lead to many more error messages when inference fails.
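These flags combine in the obvious way.  For instance, this hypothetical invocation type-checks project \texttt{P}, asking for best-effort type information even in the presence of inference errors:
\begin{verbatim}
urweb -tc -dumpTypes -unifyMore P
\end{verbatim}
Since the type summary goes to stdout and error messages go to stderr, the two output streams may be separated with ordinary shell redirection, e.g. \cd{2>errors.txt}.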

To output information relevant to CSS stylesheets (and not finish regular compilation), run
\begin{verbatim}
urweb -css P
\end{verbatim}
The first output line is a list of categories of CSS properties that would be worth setting on the document body.  The remaining lines are space-separated pairs of CSS class names and categories of properties that would be worth setting for that class.  The category codes are divided into two varieties.  Codes that reveal properties of a tag or its (recursive) children are \cd{B} for block-level elements, \cd{C} for table captions, \cd{D} for table cells, \cd{L} for lists, and \cd{T} for tables.  Codes that reveal properties of the precise tag that uses a class are \cd{b} for block-level elements, \cd{t} for tables, \cd{d} for table cells, \cd{-} for table rows, \cd{H} for the possibility to set a height, \cd{N} for non-replaced inline-level elements, \cd{R} for replaced inline elements, and \cd{W} for the possibility to set a width.

Ur/Web type inference can take a significant amount of time, so it can be helpful to cache type-inferred versions of source files.  This mode can be activated by running
\begin{verbatim}
urweb daemon start
\end{verbatim}
Further \cd{urweb} invocations in the same working directory will send requests to a background daemon process that reuses type inference results whenever possible, tracking source file dependencies and modification times.  To stop the background daemon, run
\begin{verbatim}
urweb daemon stop
\end{verbatim}
Communication happens via a UNIX domain socket in file \cd{.urweb\_daemon} in the working directory.

\medskip

Some other command-line parameters are accepted:
\begin{itemize}
\item \texttt{-boot}: Run Ur/Web from a build tree (and not from a system install).  This is useful if you're testing the compiler and don't want to install it.  It forces generation of statically linked executables.

\item \texttt{-db <DBSTRING>}: Set database connection information, using the format expected by Postgres's \texttt{PQconnectdb()}, which is \texttt{name1=value1 ... nameN=valueN}.  The same format is also parsed and used to discover connection parameters for MySQL and SQLite.  The only significant settings for MySQL are \texttt{host}, \texttt{hostaddr}, \texttt{port}, \texttt{dbname}, \texttt{user}, and \texttt{password}.  The only significant setting for SQLite is \texttt{dbname}, which is interpreted as the filesystem path to the database.  Additionally, when using SQLite, a database string may be just a file path.

\item \texttt{-dbms [postgres|mysql|sqlite]}: Sets the database backend to use.
  \begin{itemize}
  \item \texttt{postgres}: This is PostgreSQL, the default.  Among the supported engines, Postgres best matches the design philosophy behind Ur, with a focus on consistent views of data, even in the face of much concurrency.  Different database engines have different quirks of SQL syntax.  Ur/Web tends to use Postgres idioms where there are choices to be made, though the compiler translates SQL as needed to support other backends.

    A command sequence like this can initialize a Postgres database, using a file \texttt{app.sql} generated by the compiler:
    \begin{verbatim}
createdb app
psql -f app.sql app
    \end{verbatim}

  \item \texttt{mysql}: This is MySQL, another popular relational database engine that uses persistent server processes.  Ur/Web needs transactions to function properly.  Many installations of MySQL use non-transactional storage engines by default.  Ur/Web generates table definitions that try to use MySQL's InnoDB engine, which supports transactions.  You can edit the first line of a generated \texttt{.sql} file to change this behavior, but it really is true that Ur/Web applications will exhibit bizarre behavior if you choose an engine that ignores transaction commands.

    A command sequence like this can initialize a MySQL database:
    \begin{verbatim}
echo "CREATE DATABASE app" | mysql
mysql -D app <app.sql
    \end{verbatim}

  \item \texttt{sqlite}: This is SQLite, a simple filesystem-based transactional database engine.  With this backend, Ur/Web applications can run without any additional server processes.  The other engines are generally preferred for large-workload performance and full admin feature sets, while SQLite is popular for its low resource footprint and ease of set-up.

    A command like this can initialize an SQLite database:
    \begin{verbatim}
sqlite3 path/to/database/file <app.sql
    \end{verbatim}
  \end{itemize}

\item \texttt{-dumpSource}: When compilation fails, output to stderr the complete source code of the last intermediate program before the compilation phase that signaled the error.  (Warning: these outputs can be very long and aren't especially optimized for readability!)

\item \texttt{-limit class num}: Equivalent to the \texttt{limit} directive from \texttt{.urp} files

\item \texttt{-output FILENAME}: Set where the application executable is written.

\item \texttt{-path NAME VALUE}: Set the value of path variable \texttt{\$NAME} to \texttt{VALUE}, for use in \texttt{.urp} files.

\item \texttt{-prefix PREFIX}: Equivalent to the \texttt{prefix} directive from \texttt{.urp} files

\item \texttt{-protocol [http|cgi|fastcgi|static]}: Set the protocol that the generated application speaks.
  \begin{itemize}
  \item \texttt{http}: This is the default.  It is for building standalone web servers that can be accessed by web browsers directly.

  \item \texttt{cgi}: This is the classic protocol that web servers use to generate dynamic content by spawning new processes.  While Ur/Web programs may in general use message-passing with the \texttt{send} and \texttt{recv} functions, that functionality is not yet supported in CGI, since CGI needs a fresh process for each request, and message-passing needs to use persistent sockets to deliver messages.

    Since Ur/Web treats paths in an unusual way, a configuration line like this one can be used to configure an application that was built with URL prefix \texttt{/Hello}:
    \begin{verbatim}
ScriptAlias /Hello /path/to/hello.exe
    \end{verbatim}

    A different method can be used for, e.g., a shared host, where you can only configure Apache via \texttt{.htaccess} files.  Drop the generated executable into your web space and mark it as CGI somehow.  For instance, if the script ends in \texttt{.exe}, you might put this in \texttt{.htaccess} in the directory containing the script:
    \begin{verbatim}
Options +ExecCGI
AddHandler cgi-script .exe
    \end{verbatim}

    Additionally, make sure that Ur/Web knows the proper URI prefix for your script.  For instance, if the script is accessed via \texttt{http://somewhere/dir/script.exe}, then include this line in your \texttt{.urp} file:
    \begin{verbatim}
prefix /dir/script.exe/
    \end{verbatim}

    To access the \texttt{foo} function in the \texttt{Bar} module, you would then hit \texttt{http://somewhere/dir/script.exe/Bar/foo}.

    If your application contains form handlers that read cookies before causing side effects, then you will need to use the \texttt{sigfile} \texttt{.urp} directive, too.

  \item \texttt{fastcgi}: This is a newer protocol inspired by CGI, wherein web servers can start and reuse persistent external processes to generate dynamic content.  Ur/Web doesn't implement the whole protocol, but Ur/Web's support has been tested to work with the \texttt{mod\_fastcgi} modules of Apache and lighttpd.

    To configure a FastCGI program with Apache, one could combine the above \texttt{ScriptAlias} line with a line like this:
    \begin{verbatim}
FastCgiServer /path/to/hello.exe -idle-timeout 99999
    \end{verbatim}
    The idle timeout is only important for applications that use message-passing.  Client connections may go long periods without receiving messages, and Apache tries to be helpful and garbage collect them in such cases.  To prevent that behavior, we specify how long a connection must be idle to be collected.

    Also see the discussion of the \cd{prefix} directive for CGI above; similar configuration is likely to be necessary for FastCGI.  An Ur/Web application won't generally run correctly if it doesn't have a unique URI prefix assigned to it and configured with \cd{prefix}.

    Here is some lighttpd configuration for the same application.
    \begin{verbatim}
fastcgi.server = (
  "/Hello/" =>
  (( "bin-path" => "/path/to/hello.exe",
  "socket" => "/tmp/hello",
  "check-local" => "disable",
  "docroot" => "/",
  "max-procs" => "1"
  ))
)
    \end{verbatim}
    The least obvious requirement is setting \texttt{max-procs} to 1, so that lighttpd doesn't try to multiplex requests across multiple external processes.  This is required for message-passing applications, where a single database of client connections is maintained within a multi-threaded server process.  Multiple processes may, however, be used safely with applications that don't use message-passing.

    A FastCGI process reads the environment variable \texttt{URWEB\_NUM\_THREADS} to determine how many threads to spawn for handling client requests.  The default is 1.

  \item \texttt{static}: This protocol may be used to generate static web pages from Ur/Web code.  The output executable expects a single command-line argument, giving the URI of a page to generate.  For instance, this argument might be \cd{/main}, in which case a static HTTP response for that page will be written to stdout.
  \end{itemize}

\item \texttt{-root Name PATH}: Trigger an alternate module convention for all source files found in directory \texttt{PATH} or any of its subdirectories.  Any file \texttt{PATH/foo.ur} defines a module \texttt{Name.Foo} instead of the usual \texttt{Foo}.  Any file \texttt{PATH/subdir/foo.ur} defines a module \texttt{Name.Subdir.Foo}, and so on for arbitrary nesting of subdirectories.

\item \texttt{-sigfile PATH}: Same as the \texttt{sigfile} directive in \texttt{.urp} files.

\item \texttt{-sql FILENAME}: Set where a database set-up SQL script is written.

\item \texttt{-static}: Link the runtime system statically.  The default is to link against dynamic libraries.
\end{itemize}

There is an additional convenience method for invoking \texttt{urweb}.  If the main argument is \texttt{FOO}, and \texttt{FOO.ur} exists but \texttt{FOO.urp} doesn't, then the invocation is interpreted as if called on a \texttt{.urp} file containing \texttt{FOO} as its only main entry, with an additional \texttt{rewrite all FOO/*} directive.
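
As an example of this convenience mode, suppose \cd{hello.ur} contains the standard one-page application:
\begin{verbatim}
fun main () = return <xml><body>Hello!</body></xml>
\end{verbatim}
Then \cd{urweb hello} compiles it as if a \cd{hello.urp} project file had been given with \cd{hello} as its only main entry.  (The file and function names here are illustrative.)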

\subsection{Tutorial Formatting}

The Ur/Web compiler also supports rendering of nice HTML tutorials from Ur source files, when invoked like \cd{urweb -tutorial DIR}.  The directory \cd{DIR} is examined for files whose names end in \cd{.ur}.  Every such file is translated into a \cd{.html} version.

These input files follow normal Ur syntax, with a few exceptions:
\begin{itemize}
\item The first line must be a comment like \cd{(* TITLE *)}, where \cd{TITLE} is a string of your choice that will be used as the title of the output page.
\item While most code in the output HTML will be formatted as a monospaced code listing, text in regular Ur comments is formatted as normal English text.
\item A comment like \cd{(* * HEADING *)} introduces a section heading, with text \cd{HEADING} of your choice.
\item To include both a rendering of an Ur expression and a pretty-printed version of its value, bracket the expression with \cd{(* begin eval *)} and \cd{(* end *)}.  The result of expression evaluation is pretty-printed with \cd{show}, so the expression type must belong to that type class.
\item To include code that should not be shown in the tutorial (e.g., to add a \cd{show} instance to use with \cd{eval}), bracket the code with \cd{(* begin hide *)} and \cd{(* end *)}.
\end{itemize}
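
Putting these conventions together, a minimal tutorial source file (with illustrative title, heading, and code) might read:
\begin{verbatim}
(* Arithmetic *)

(* This comment is rendered as normal English text. *)

(* * Evaluation *)

(* begin hide *)
val six : int = 6
(* end *)

(* begin eval *)
six + 1
(* end *)
\end{verbatim}
The \cd{eval} block is legal because \cd{int} belongs to the \cd{show} type class.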

A word of warning: as with demo generation, tutorial generation calls Emacs to syntax-highlight Ur code.

\subsection{Run-Time Options}

Compiled applications consult a few environment variables to modify their behavior:

\begin{itemize}
  \item \cd{URWEB\_NUM\_THREADS}: alternative to the \cd{-t} command-line argument (currently used only by FastCGI)
  \item \cd{URWEB\_STACK\_SIZE}: size of per-thread stacks, in bytes
  \item \cd{URWEB\_PQ\_CON}: when using PostgreSQL, overrides the compiled-in connection string
\end{itemize}


\section{Ur Syntax}

In this section, we describe the syntax of Ur, deferring to a later section discussion of most of the syntax specific to SQL and XML.  The sole exceptions are the declaration forms for relations, cookies, and styles.

\subsection{Lexical Conventions}

We give the Ur language definition in \LaTeX $\;$ math mode, since that is prettier than monospaced ASCII.  The corresponding ASCII syntax can be read off directly.  Here is the key for mapping math symbols to ASCII character sequences.

\begin{center}
  \begin{tabular}{rl}
    \textbf{\LaTeX} & \textbf{ASCII} \\
    $\to$ & \cd{->} \\
    $\longrightarrow$ & \cd{-{}->} \\
    $\times$ & \cd{*} \\
    $\lambda$ & \cd{fn} \\
    $\Rightarrow$ & \cd{=>} \\
    $\Longrightarrow$ & \cd{==>} \\
    $\neq$ & \cd{<>} \\
    $\leq$ & \cd{<=} \\
    $\geq$ & \cd{>=} \\
    \\
    $x$ & Normal textual identifier, not beginning with an uppercase letter \\
    $X$ & Normal textual identifier, beginning with an uppercase letter \\
  \end{tabular}
\end{center}

We often write syntax like $e^*$ to indicate zero or more copies of $e$, $e^+$ to indicate one or more copies, and $e,^*$ and $e,^+$ to indicate multiple copies separated by commas.  Another separator may be used in place of a comma.  The $e$ term may be surrounded by parentheses to indicate grouping; those parentheses should not be included in the actual ASCII.

We write $\ell$ for literals of the primitive types, for the most part following C conventions.  There are $\mt{int}$, $\mt{float}$, $\mt{char}$, and $\mt{string}$ literals.  Character literals follow the SML convention instead of the C convention, written like \texttt{\#"a"} instead of \texttt{'a'}.
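
For instance, here are well-formed literals of each primitive type, in illustrative $\mt{val}$ declarations:
\begin{verbatim}
val n : int = 42
val f : float = 3.14
val c : char = #"a"
val s : string = "hello"
\end{verbatim}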

This version of the manual doesn't include operator precedences; see \texttt{src/urweb.grm} for that.

As in the ML language family, the syntax \texttt{(* ... *)} is used for (nestable) comments.  Within XML literals, Ur/Web also supports the usual \texttt{<!-- ... -->} XML comments.

\subsection{\label{core}Core Syntax}

\emph{Kinds} classify types and other compile-time-only entities.  Each kind in the grammar is listed with a description of the sort of data it classifies.
$$\begin{array}{rrcll}
  \textrm{Kinds} & \kappa &::=& \mt{Type} & \textrm{proper types} \\
  &&& \mt{Unit} & \textrm{the trivial constructor} \\
  &&& \mt{Name} & \textrm{field names} \\
  &&& \kappa \to \kappa & \textrm{type-level functions} \\
  &&& \{\kappa\} & \textrm{type-level records} \\
  &&& (\kappa\times^+) & \textrm{type-level tuples} \\
  &&& X & \textrm{variable} \\
  &&& X \longrightarrow \kappa & \textrm{kind-polymorphic type-level function} \\
  &&& \_\_ & \textrm{wildcard} \\
  &&& (\kappa) & \textrm{explicit precedence} \\
\end{array}$$

Ur supports several different notions of functions that take types as arguments.  These arguments can be either implicit, causing them to be inferred at use sites; or explicit, forcing them to be specified manually at use sites.  There is a common explicitness annotation convention applied at the definitions of and in the types of such functions.
$$\begin{array}{rrcll}
  \textrm{Explicitness} & ? &::=& :: & \textrm{explicit} \\
  &&& ::: & \textrm{implicit}
\end{array}$$

\emph{Constructors} are the main class of compile-time-only data.  They include proper types and are classified by kinds.
$$\begin{array}{rrcll}
  \textrm{Constructors} & c, \tau &::=& (c) :: \kappa & \textrm{kind annotation} \\
  &&& \hat{x} & \textrm{constructor variable} \\
  \\
  &&& \tau \to \tau & \textrm{function type} \\
  &&& x \; ? \; \kappa \to \tau & \textrm{polymorphic function type} \\
  &&& X \longrightarrow \tau & \textrm{kind-polymorphic function type} \\
  &&& \$ c & \textrm{record type} \\
  \\
  &&& c \; c & \textrm{type-level function application} \\
  &&& \lambda x \; :: \; \kappa \Rightarrow c & \textrm{type-level function abstraction} \\
  \\
  &&& X \Longrightarrow c & \textrm{type-level kind-polymorphic function abstraction} \\
  &&& c [\kappa] & \textrm{type-level kind-polymorphic function application} \\
  \\
  &&& () & \textrm{type-level unit} \\
  &&& \#X & \textrm{field name} \\
  \\
  &&& [(c = c)^*] & \textrm{known-length type-level record} \\
  &&& c \rc c & \textrm{type-level record concatenation} \\
  &&& \mt{map} & \textrm{type-level record map} \\
  \\
  &&& (c,^+) & \textrm{type-level tuple} \\
  &&& c.n & \textrm{type-level tuple projection ($n \in \mathbb N^+$)} \\
  \\
  &&& [c \sim c] \Rightarrow \tau & \textrm{guarded type} \\
  \\
  &&& \_ :: \kappa & \textrm{wildcard} \\
  &&& (c) & \textrm{explicit precedence} \\
  \\
  \textrm{Qualified uncapitalized variables} & \hat{x} &::=& x & \textrm{not from a module} \\
  &&& M.x & \textrm{projection from a module} \\
\end{array}$$

We include both abstraction and application for kind polymorphism, but applications are only inferred internally; they may not be written explicitly in source programs.  Also, in the ``known-length type-level record'' form, in $c_1 = c_2$ terms, the parser currently only allows $c_1$ to be of the forms $X$ (as a shorthand for $\#X$) or $x$, or a natural number to stand for the corresponding field name (e.g., for tuples).

Modules of the module system are described by \emph{signatures}.
$$\begin{array}{rrcll}
  \textrm{Signatures} & S &::=& \mt{sig} \; s^* \; \mt{end} & \textrm{constant} \\
  &&& X & \textrm{variable} \\
  &&& \mt{functor}(X : S) : S & \textrm{functor} \\
  &&& S \; \mt{where} \; \mt{con} \; x = c & \textrm{concretizing an abstract constructor} \\
  &&& M.X & \textrm{projection from a module} \\
  \\
  \textrm{Signature items} & s &::=& \mt{con} \; x :: \kappa & \textrm{abstract constructor} \\
  &&& \mt{con} \; x :: \kappa = c & \textrm{concrete constructor} \\
  &&& \mt{datatype} \; x \; x^* = dc\mid^+ & \textrm{algebraic datatype definition} \\
  &&& \mt{datatype} \; x = \mt{datatype} \; M.x & \textrm{algebraic datatype import} \\
  &&& \mt{val} \; x : \tau & \textrm{value} \\
  &&& \mt{structure} \; X : S & \textrm{sub-module} \\
  &&& \mt{signature} \; X = S & \textrm{sub-signature} \\
  &&& \mt{include} \; S & \textrm{signature inclusion} \\
  &&& \mt{constraint} \; c \sim c & \textrm{record disjointness constraint} \\
  &&& \mt{class} \; x :: \kappa & \textrm{abstract constructor class} \\
  &&& \mt{class} \; x :: \kappa = c & \textrm{concrete constructor class} \\
  \\
  \textrm{Datatype constructors} & dc &::=& X & \textrm{nullary constructor} \\
  &&& X \; \mt{of} \; \tau & \textrm{unary constructor} \\
\end{array}$$

\emph{Patterns} are used to describe structural conditions on expressions, such that expressions may be tested against patterns, generating assignments to pattern variables if successful.
$$\begin{array}{rrcll}
  \textrm{Patterns} & p &::=& \_ & \textrm{wildcard} \\
  &&& x & \textrm{variable} \\
  &&& \ell & \textrm{constant} \\
  &&& \hat{X} & \textrm{nullary constructor} \\
  &&& \hat{X} \; p & \textrm{unary constructor} \\
  &&& \{(x = p,)^*\} & \textrm{rigid record pattern} \\
  &&& \{(x = p,)^+, \ldots\} & \textrm{flexible record pattern} \\
  &&& p : \tau & \textrm{type annotation} \\
  &&& (p) & \textrm{explicit precedence} \\
  \\
  \textrm{Qualified capitalized variables} & \hat{X} &::=& X & \textrm{not from a module} \\
  &&& M.X & \textrm{projection from a module} \\
\end{array}$$

\emph{Expressions} are the main run-time entities, corresponding to both ``expressions'' and ``statements'' in mainstream imperative languages.
$$\begin{array}{rrcll}
  \textrm{Expressions} & e &::=& e : \tau & \textrm{type annotation} \\
  &&& \hat{x} & \textrm{variable} \\
  &&& \hat{X} & \textrm{datatype constructor} \\
  &&& \ell & \textrm{constant} \\
  \\
  &&& e \; e & \textrm{function application} \\
  &&& \lambda x : \tau \Rightarrow e & \textrm{function abstraction} \\
  &&& e [c] & \textrm{polymorphic function application} \\
  &&& \lambda [x \; ? \; \kappa] \Rightarrow e & \textrm{polymorphic function abstraction} \\
  &&& e [\kappa] & \textrm{kind-polymorphic function application} \\
  &&& X \Longrightarrow e & \textrm{kind-polymorphic function abstraction} \\
  \\
  &&& \{(c = e,)^*\} & \textrm{known-length record} \\
  &&& e.c & \textrm{record field projection} \\
  &&& e \rc e & \textrm{record concatenation} \\
  &&& e \rcut c & \textrm{removal of a single record field} \\
  &&& e \rcutM c & \textrm{removal of multiple record fields} \\
  \\
  &&& \mt{let} \; ed^* \; \mt{in} \; e \; \mt{end} & \textrm{local definitions} \\
  \\
  &&& \mt{case} \; e \; \mt{of} \; (p \Rightarrow e|)^+ & \textrm{pattern matching} \\
  \\
  &&& \lambda [c \sim c] \Rightarrow e & \textrm{guarded expression abstraction} \\
  &&& e \; ! & \textrm{guarded expression application} \\
  \\
  &&& \_ & \textrm{wildcard} \\
  &&& (e) & \textrm{explicit precedence} \\
  \\
  \textrm{Local declarations} & ed &::=& \cd{val} \; x : \tau = e & \textrm{non-recursive value} \\
  &&& \cd{val} \; \cd{rec} \; (x : \tau = e \; \cd{and})^+ & \textrm{mutually recursive values} \\
\end{array}$$

As with constructors, we include both abstraction and application for kind polymorphism, but applications are only inferred internally.

\emph{Declarations} primarily bring new symbols into context.
$$\begin{array}{rrcll}
  \textrm{Declarations} & d &::=& \mt{con} \; x :: \kappa = c & \textrm{constructor synonym} \\
  &&& \mt{datatype} \; x \; x^* = dc\mid^+ & \textrm{algebraic datatype definition} \\
  &&& \mt{datatype} \; x = \mt{datatype} \; M.x & \textrm{algebraic datatype import} \\
  &&& \mt{val} \; x : \tau = e & \textrm{value} \\
  &&& \mt{val} \; \cd{rec} \; (x : \tau = e \; \mt{and})^+ & \textrm{mutually recursive values} \\
  &&& \mt{structure} \; X : S = M & \textrm{module definition} \\
  &&& \mt{signature} \; X = S & \textrm{signature definition} \\
  &&& \mt{open} \; M & \textrm{module inclusion} \\
  &&& \mt{constraint} \; c \sim c & \textrm{record disjointness constraint} \\
  &&& \mt{open} \; \mt{constraints} \; M & \textrm{inclusion of just the constraints from a module} \\
  &&& \mt{table} \; x : c & \textrm{SQL table} \\
  &&& \mt{view} \; x = e & \textrm{SQL view} \\
  &&& \mt{sequence} \; x & \textrm{SQL sequence} \\
  &&& \mt{cookie} \; x : \tau & \textrm{HTTP cookie} \\
  &&& \mt{style} \; x : \tau & \textrm{CSS class} \\
  &&& \mt{task} \; e = e & \textrm{recurring task} \\
  \\
  \textrm{Modules} & M &::=& \mt{struct} \; d^* \; \mt{end} & \textrm{constant} \\
  &&& X & \textrm{variable} \\
  &&& M.X & \textrm{projection} \\
  &&& M(M) & \textrm{functor application} \\
  &&& \mt{functor}(X : S) : S = M & \textrm{functor abstraction} \\
\end{array}$$

There are two kinds of Ur files.  A file named $M\texttt{.ur}$ is an \emph{implementation file}, and it should contain a sequence of declarations $d^*$.  A file named $M\texttt{.urs}$ is an \emph{interface file}; it must always have a matching $M\texttt{.ur}$ and should contain a sequence of signature items $s^*$.  When both files are present, the overall effect is the same as a monolithic declaration $\mt{structure} \; M : \mt{sig} \; s^* \; \mt{end} = \mt{struct} \; d^* \; \mt{end}$.  When no interface file is included, the overall effect is similar, with a signature for module $M$ being inferred rather than just checked against an interface.
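
For example, an interface file \cd{counter.urs} containing
\begin{verbatim}
val inc : int -> int
\end{verbatim}
paired with an implementation file \cd{counter.ur} containing
\begin{verbatim}
fun inc n = n + 1
\end{verbatim}
has the same effect as a single $\mt{structure} \; \mt{Counter}$ declaration sealed with that one-item signature.  (The file names and contents are illustrative.)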

We omit some extra possibilities in $\mt{table}$ syntax, deferring them to Section \ref{tables}.  The concrete syntax of $\mt{view}$ declarations is also more complex than shown in the table above, with details deferred to Section \ref{tables}.

\subsection{Shorthands}

There are a variety of derived syntactic forms that elaborate into the core syntax from the last subsection.  We will present the additional forms roughly following the order in which we presented the constructs that they elaborate into.

In many contexts where record fields are expected, as in a projection $e.c$, a constant field may be written as simply $X$, rather than $\#X$.

A record type may be written $\{(c = c,)^*\}$, which elaborates to $\$[(c = c,)^*]$.

The notation $[c_1, \ldots, c_n]$ is shorthand for $[c_1 = (), \ldots, c_n = ()]$.

A tuple type $\tau_1 \times \ldots \times \tau_n$ expands to a record type $\{1 : \tau_1, \ldots, n : \tau_n\}$, with natural numbers as field names.  A tuple expression $(e_1, \ldots, e_n)$ expands to a record expression $\{1 = e_1, \ldots, n = e_n\}$.  A tuple pattern $(p_1, \ldots, p_n)$ expands to a rigid record pattern $\{1 = p_1, \ldots, n = p_n\}$.  Positive natural numbers may be used in most places where field names would be allowed.
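
To illustrate, these declarations (with illustrative names) use the tuple shorthands, where both the pair and its type elaborate to records with fields $1$ and $2$:
\begin{verbatim}
val p : int * string = (1, "one")
val first : int = p.1
\end{verbatim}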

The syntax $()$ expands to $\{\}$ as a pattern or expression.

In general, several adjacent $\lambda$ forms may be combined into one, and kind and type annotations may be omitted, in which case they are implicitly included as wildcards.  More formally, for constructor-level abstractions, we can define a new non-terminal $b ::= x \mid (x :: \kappa) \mid X$ and allow composite abstractions of the form $\lambda b^+ \Rightarrow c$, elaborating into the obvious sequence of one core $\lambda$ per element of $b^+$.

Further, the signature item or declaration syntax $\mt{con} \; x \; b^+ = c$ is shorthand for wrapping of the appropriate $\lambda$s around the righthand side $c$.  The $b$ elements may not include $X$, and there may also be an optional $:: \kappa$ before the $=$.
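
For example, these three declarations of a type-level identity function are equivalent; the second omits the kind annotation, which is filled in as a wildcard, and the third uses the $\mt{con}$ shorthand:
\begin{verbatim}
con ident1 = fn t :: Type => t
con ident2 = fn t => t
con ident3 t = t
\end{verbatim}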

In some contexts, the parser isn't happy with token sequences like $x :: \_$, to indicate a constructor variable of wildcard kind.  In such cases, write the second two tokens as $::\hspace{-.05in}\_$, with no intervening spaces.  Analogous syntax $:::\hspace{-.05in}\_$ is available for implicit constructor arguments.

For any signature item or declaration that defines some entity to be equal to $A$ with classification annotation $B$ (e.g., $\mt{val} \; x : B = A$), $B$ and the preceding colon (or similar punctuation) may be omitted, in which case it is filled in as a wildcard.

A signature item or declaration $\mt{type} \; x$ or $\mt{type} \; x = \tau$ is elaborated into $\mt{con} \; x :: \mt{Type}$ or $\mt{con} \; x :: \mt{Type} = \tau$, respectively.

A signature item or declaration $\mt{class} \; x = \lambda y \Rightarrow c$ may be abbreviated $\mt{class} \; x \; y = c$.

Handling of implicit and explicit constructor arguments may be tweaked with some prefixes to variable references.  An expression $@x$ is a version of $x$ where all type class instance and disjointness arguments have been made explicit.  (For the purposes of this paragraph, the type family $\mt{Top.folder}$ is a type class, though it isn't marked as one by the usual means; and any record type is considered to be a type class instance type when every field's type is a type class instance type.)  An expression $@@x$ achieves the same effect, additionally making explicit all implicit constructor arguments.  The default is that implicit arguments are inserted automatically after any reference to a variable, or after any application of a variable to one or more arguments.  For such an expression, implicit wildcard arguments are added for the longest prefix of the expression's type consisting only of implicit polymorphism, type class instances, and disjointness obligations.  The same syntax works for variables projected out of modules and for capitalized variables (datatype constructors).

At the expression level, an analogue is available of the composite $\lambda$ form for constructors.  We define the language of binders as $b ::= p \mid [x] \mid [x \; ? \; \kappa] \mid X \mid [c \sim c]$.  A lone variable $[x]$ stands for an implicit constructor variable of unspecified kind.  The standard value-level function binder is recovered as the type-annotated pattern form $x : \tau$.  It is a compile-time error to include a pattern $p$ that does not match every value of the appropriate type.

A local $\mt{val}$ declaration may bind a pattern instead of just a plain variable.  As for function arguments, only irrefutable patterns are legal.

The keyword $\mt{fun}$ is a shorthand for $\mt{val} \; \mt{rec}$ that allows arguments to be specified before the equal sign in the definition of each mutually recursive function, as in SML.  Each curried argument must follow the grammar of the $b$ non-terminal introduced two paragraphs ago.  A $\mt{fun}$ declaration is elaborated into a version that adds additional $\lambda$s to the fronts of the righthand sides, as appropriate.
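
As an illustration (with hypothetical names), the declaration
\begin{verbatim}
fun fact (n : int) : int =
    if n = 0 then 1 else n * fact (n - 1)
\end{verbatim}
elaborates into a $\mt{val} \; \mt{rec}$ declaration whose righthand side begins with a $\lambda$ binding \cd{n}.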

A signature item $\mt{functor} \; X_1 \; (X_2 : S_1) : S_2$ is elaborated into $\mt{structure} \; X_1 : \mt{functor}(X_2 : S_1) : S_2$.  A declaration $\mt{functor} \; X_1 \; (X_2 : S_1) : S_2 = M$ is elaborated into $\mt{structure} \; X_1 : \mt{functor}(X_2 : S_1) : S_2 = \mt{functor}(X_2 : S_1) : S_2 = M$.

An $\mt{open} \; \mt{constraints}$ declaration is implicitly inserted for the argument of every functor at the beginning of the functor body.  For every declaration of the form $\mt{structure} \; X : S = \mt{struct} \ldots \mt{end}$, an $\mt{open} \; \mt{constraints} \; X$ declaration is implicitly inserted immediately afterward.

A declaration $\mt{table} \; x : \{(c = c,)^*\}$ is elaborated into $\mt{table} \; x : [(c = c,)^*]$.

The syntax $\mt{where} \; \mt{type}$ is an alternate form of $\mt{where} \; \mt{con}$.

The syntax $\mt{if} \; e \; \mt{then} \; e_1 \; \mt{else} \; e_2$ expands to $\mt{case} \; e \; \mt{of} \; \mt{Basis}.\mt{True} \Rightarrow e_1 \mid \mt{Basis}.\mt{False} \Rightarrow e_2$.

There are infix operator syntaxes for a number of functions defined in the $\mt{Basis}$ module.  There is $=$ for $\mt{eq}$, $\neq$ for $\mt{neq}$, $-$ for $\mt{neg}$ (as a prefix operator) and $\mt{minus}$, $+$ for $\mt{plus}$, $\times$ for $\mt{times}$, $/$ for $\mt{div}$, $\%$ for $\mt{mod}$, $<$ for $\mt{lt}$, $\leq$ for $\mt{le}$, $>$ for $\mt{gt}$, and $\geq$ for $\mt{ge}$.

A signature item $\mt{table} \; x : c$ is shorthand for $\mt{val} \; x : \mt{Basis}.\mt{sql\_table} \; c \; []$.  Similarly, $\mt{view} \; x : c$ is shorthand for $\mt{val} \; x : \mt{Basis}.\mt{sql\_view} \; c$, and $\mt{sequence} \; x$ is short for $\mt{val} \; x : \mt{Basis}.\mt{sql\_sequence}$.  $\mt{cookie} \; x : \tau$ is shorthand for $\mt{val} \; x : \mt{Basis}.\mt{http\_cookie} \; \tau$, and $\mt{style} \; x$ is shorthand for $\mt{val} \; x : \mt{Basis}.\mt{css\_class}$.


\section{Static Semantics}

In this section, we give a declarative presentation of Ur's typing rules and related judgments.  Inference is the subject of the next section; here, we assume that an oracle has filled in all wildcards with concrete values.

Since there is significant mutual recursion among the judgments, we introduce them all before beginning to give rules.  We use the same variety of contexts throughout this section, implicitly introducing new sorts of context entries as needed.
\begin{itemize}
\item $\Gamma \vdash \kappa$ expresses kind well-formedness.
\item $\Gamma \vdash c :: \kappa$ assigns a kind to a constructor in a context.
\item $\Gamma \vdash c \sim c$ proves the disjointness of two record constructors; that is, that they share no field names.  We overload the judgment to apply to pairs of field names as well.
\item $\Gamma \vdash c \hookrightarrow C$ proves that record constructor $c$ decomposes into set $C$ of field names and record constructors.
\item $\Gamma \vdash c \equiv c$ proves the computational equivalence of two constructors.  This is often called a \emph{definitional equality} in the world of type theory.
\item $\Gamma \vdash e : \tau$ is a standard typing judgment.
\item $\Gamma \vdash p \leadsto \Gamma; \tau$ combines typing of patterns with calculation of which new variables they bind.
\item $\Gamma \vdash d \leadsto \Gamma$ expresses how a declaration modifies a context.  We overload this judgment to apply to sequences of declarations, as well as to signature items and sequences of signature items.
\item $\Gamma \vdash S \equiv S$ is the signature equivalence judgment.
\item $\Gamma \vdash S \leq S$ is the signature compatibility judgment.  We write $\Gamma \vdash S$ as shorthand for $\Gamma \vdash S \leq S$.
\item $\Gamma \vdash M : S$ is the module signature checking judgment.
\item $\mt{proj}(M, \overline{s}, V)$ is a partial function for projecting a signature item from $\overline{s}$, given the module $M$ that we project from.  $V$ may be $\mt{con} \; x$, $\mt{datatype} \; x$, $\mt{val} \; x$, $\mt{signature} \; X$, or $\mt{structure} \; X$.  The parameter $M$ is needed because the projected signature item may refer to other items from $\overline{s}$.
\item $\mt{selfify}(M, \overline{s})$ adds information to signature items $\overline{s}$ to reflect the fact that we are concerned with the particular module $M$.  This function is overloaded to work over individual signature items as well.
\end{itemize}


\subsection{Kind Well-Formedness}

$$\infer{\Gamma \vdash \mt{Type}}{}
\quad \infer{\Gamma \vdash \mt{Unit}}{}
\quad \infer{\Gamma \vdash \mt{Name}}{}
\quad \infer{\Gamma \vdash \kappa_1 \to \kappa_2}{
  \Gamma \vdash \kappa_1
  & \Gamma \vdash \kappa_2
}
\quad \infer{\Gamma \vdash \{\kappa\}}{
  \Gamma \vdash \kappa
}
\quad \infer{\Gamma \vdash (\kappa_1 \times \ldots \times \kappa_n)}{
  \forall i: \Gamma \vdash \kappa_i
}$$

$$\infer{\Gamma \vdash X}{
  X \in \Gamma
}
\quad \infer{\Gamma \vdash X \longrightarrow \kappa}{
  \Gamma, X \vdash \kappa
}$$

\subsection{Kinding}

We write $[X \mapsto \kappa_1]\kappa_2$ for capture-avoiding substitution of $\kappa_1$ for $X$ in $\kappa_2$.

$$\infer{\Gamma \vdash (c) :: \kappa :: \kappa}{
  \Gamma \vdash c :: \kappa
}
\quad \infer{\Gamma \vdash x :: \kappa}{
  x :: \kappa \in \Gamma
}
\quad \infer{\Gamma \vdash x :: \kappa}{
  x :: \kappa = c \in \Gamma
}$$

$$\infer{\Gamma \vdash M.x :: \kappa}{
  \Gamma \vdash M : \mt{sig} \; \overline{s} \; \mt{end}
  & \mt{proj}(M, \overline{s}, \mt{con} \; x) = \kappa
}
\quad \infer{\Gamma \vdash M.x :: \kappa}{
  \Gamma \vdash M : \mt{sig} \; \overline{s} \; \mt{end}
  & \mt{proj}(M, \overline{s}, \mt{con} \; x) = (\kappa, c)
}$$

$$\infer{\Gamma \vdash \tau_1 \to \tau_2 :: \mt{Type}}{
  \Gamma \vdash \tau_1 :: \mt{Type}
  & \Gamma \vdash \tau_2 :: \mt{Type}
}
\quad \infer{\Gamma \vdash x \; ? \; \kappa \to \tau :: \mt{Type}}{
  \Gamma, x :: \kappa \vdash \tau :: \mt{Type}
}
\quad \infer{\Gamma \vdash X \longrightarrow \tau :: \mt{Type}}{
  \Gamma, X \vdash \tau :: \mt{Type}
}
\quad \infer{\Gamma \vdash \$c :: \mt{Type}}{
  \Gamma \vdash c :: \{\mt{Type}\}
}$$

$$\infer{\Gamma \vdash c_1 \; c_2 :: \kappa_2}{
  \Gamma \vdash c_1 :: \kappa_1 \to \kappa_2
  & \Gamma \vdash c_2 :: \kappa_1
}
\quad \infer{\Gamma \vdash \lambda x \; :: \; \kappa_1 \Rightarrow c :: \kappa_1 \to \kappa_2}{
  \Gamma, x :: \kappa_1 \vdash c :: \kappa_2
}$$

$$\infer{\Gamma \vdash c[\kappa'] :: [X \mapsto \kappa']\kappa}{
  \Gamma \vdash c :: X \longrightarrow \kappa
  & \Gamma \vdash \kappa'
}
\quad \infer{\Gamma \vdash X \Longrightarrow c :: X \longrightarrow \kappa}{
  \Gamma, X \vdash c :: \kappa
}$$

$$\infer{\Gamma \vdash () :: \mt{Unit}}{}
\quad \infer{\Gamma \vdash \#X :: \mt{Name}}{}$$

$$\infer{\Gamma \vdash [\overline{c_i = c'_i}] :: \{\kappa\}}{
  \forall i: \Gamma \vdash c_i :: \mt{Name}
  & \Gamma \vdash c'_i :: \kappa
  & \forall i \neq j: \Gamma \vdash c_i \sim c_j
}
\quad \infer{\Gamma \vdash c_1 \rc c_2 :: \{\kappa\}}{
  \Gamma \vdash c_1 :: \{\kappa\}
  & \Gamma \vdash c_2 :: \{\kappa\}
  & \Gamma \vdash c_1 \sim c_2
}$$

$$\infer{\Gamma \vdash \mt{map} :: (\kappa_1 \to \kappa_2) \to \{\kappa_1\} \to \{\kappa_2\}}{}$$

$$\infer{\Gamma \vdash (\overline c) :: (\kappa_1 \times \ldots \times \kappa_n)}{
  \forall i: \Gamma \vdash c_i :: \kappa_i
}
\quad \infer{\Gamma \vdash c.i :: \kappa_i}{
  \Gamma \vdash c :: (\kappa_1 \times \ldots \times \kappa_n)
}$$

$$\infer{\Gamma \vdash \lambda [c_1 \sim c_2] \Rightarrow \tau :: \mt{Type}}{
  \Gamma \vdash c_1 :: \{\kappa\}
  & \Gamma \vdash c_2 :: \{\kappa'\}
  & \Gamma, c_1 \sim c_2 \vdash \tau :: \mt{Type}
}$$

\subsection{Record Disjointness}

$$\infer{\Gamma \vdash c_1 \sim c_2}{
  \Gamma \vdash c_1 \hookrightarrow C_1
  & \Gamma \vdash c_2 \hookrightarrow C_2
  & \forall c'_1 \in C_1, c'_2 \in C_2: \Gamma \vdash c'_1 \sim c'_2
}
\quad \infer{\Gamma \vdash X \sim X'}{
  X \neq X'
}$$

$$\infer{\Gamma \vdash c_1 \sim c_2}{
  c'_1 \sim c'_2 \in \Gamma
  & \Gamma \vdash c'_1 \hookrightarrow C_1
  & \Gamma \vdash c'_2 \hookrightarrow C_2
  & c_1 \in C_1
  & c_2 \in C_2
}$$

$$\infer{\Gamma \vdash c \hookrightarrow \{c\}}{}
\quad \infer{\Gamma \vdash [\overline{c = c'}] \hookrightarrow \{\overline{c}\}}{}
\quad \infer{\Gamma \vdash c_1 \rc c_2 \hookrightarrow C_1 \cup C_2}{
  \Gamma \vdash c_1 \hookrightarrow C_1
  & \Gamma \vdash c_2 \hookrightarrow C_2
}
\quad \infer{\Gamma \vdash c \hookrightarrow C}{
  \Gamma \vdash c \equiv c'
  & \Gamma \vdash c' \hookrightarrow C
}
\quad \infer{\Gamma \vdash \mt{map} \; f \; c \hookrightarrow C}{
  \Gamma \vdash c \hookrightarrow C
}$$

\subsection{\label{definitional}Definitional Equality}

We use $\mathcal C$ to stand for a one-hole context that, when filled, yields a constructor.  The notation $\mathcal C[c]$ plugs $c$ into $\mathcal C$.  We omit the standard definition of one-hole contexts.  We write $[x \mapsto c_1]c_2$ for capture-avoiding substitution of $c_1$ for $x$ in $c_2$, with analogous notation for substituting a kind in a constructor.

$$\infer{\Gamma \vdash c \equiv c}{}
\quad \infer{\Gamma \vdash c_1 \equiv c_2}{
  \Gamma \vdash c_2 \equiv c_1
}
\quad \infer{\Gamma \vdash c_1 \equiv c_3}{
  \Gamma \vdash c_1 \equiv c_2
  & \Gamma \vdash c_2 \equiv c_3
}
\quad \infer{\Gamma \vdash \mathcal C[c_1] \equiv \mathcal C[c_2]}{
  \Gamma \vdash c_1 \equiv c_2
}$$

$$\infer{\Gamma \vdash x \equiv c}{
  x :: \kappa = c \in \Gamma
}
\quad \infer{\Gamma \vdash M.x \equiv c}{
  \Gamma \vdash M : \mt{sig} \; \overline{s} \; \mt{end}
  & \mt{proj}(M, \overline{s}, \mt{con} \; x) = (\kappa, c)
}
\quad \infer{\Gamma \vdash (\overline c).i \equiv c_i}{}$$

$$\infer{\Gamma \vdash (\lambda x :: \kappa \Rightarrow c) \; c' \equiv [x \mapsto c'] c}{}
\quad \infer{\Gamma \vdash (X \Longrightarrow c) [\kappa] \equiv [X \mapsto \kappa] c}{}$$

$$\infer{\Gamma \vdash c_1 \rc c_2 \equiv c_2 \rc c_1}{}
\quad \infer{\Gamma \vdash c_1 \rc (c_2 \rc c_3) \equiv (c_1 \rc c_2) \rc c_3}{}$$

$$\infer{\Gamma \vdash [] \rc c \equiv c}{}
\quad \infer{\Gamma \vdash [\overline{c_1 = c'_1}] \rc [\overline{c_2 = c'_2}] \equiv [\overline{c_1 = c'_1}, \overline{c_2 = c'_2}]}{}$$

$$\infer{\Gamma \vdash \mt{map} \; f \; [] \equiv []}{}
\quad \infer{\Gamma \vdash \mt{map} \; f \; ([c_1 = c_2] \rc c) \equiv [c_1 = f \; c_2] \rc \mt{map} \; f \; c}{}$$

$$\infer{\Gamma \vdash \mt{map} \; (\lambda x \Rightarrow x) \; c \equiv c}{}
\quad \infer{\Gamma \vdash \mt{map} \; f \; (\mt{map} \; f' \; c)
  \equiv \mt{map} \; (\lambda x \Rightarrow f \; (f' \; x)) \; c}{}$$

$$\infer{\Gamma \vdash \mt{map} \; f \; (c_1 \rc c_2) \equiv \mt{map} \; f \; c_1 \rc \mt{map} \; f \; c_2}{}$$
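
For intuition, the $\mt{map}$ rules above compose to justify concrete equivalences such as the following, for any $f$ of kind $\mt{Type} \to \mt{Type}$:
$$\Gamma \vdash \mt{map} \; f \; [\#A = \mt{int}, \#B = \mt{string}] \equiv [\#A = f \; \mt{int}, \#B = f \; \mt{string}]$$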

\subsection{Expression Typing}

We assume the existence of a function $T$ assigning types to literal constants.  It maps integer constants to $\mt{Basis}.\mt{int}$, float constants to $\mt{Basis}.\mt{float}$, character constants to $\mt{Basis}.\mt{char}$, and string constants to $\mt{Basis}.\mt{string}$.

We also refer to a function $\mathcal I$, such that $\mathcal I(\tau)$ ``uses an oracle'' to instantiate all constructor function arguments at the beginning of $\tau$ that are marked implicit; i.e., replace $x_1 ::: \kappa_1 \to \ldots \to x_n ::: \kappa_n \to \tau$ with $[x_1 \mapsto c_1]\ldots[x_n \mapsto c_n]\tau$, where the $c_i$s are inferred and $\tau$ does not start like $x ::: \kappa \to \tau'$.

$$\infer{\Gamma \vdash e : \tau : \tau}{
  \Gamma \vdash e : \tau
}
\quad \infer{\Gamma \vdash e : \tau}{
  \Gamma \vdash e : \tau'
  & \Gamma \vdash \tau' \equiv \tau
}
\quad \infer{\Gamma \vdash \ell : T(\ell)}{}$$

$$\infer{\Gamma \vdash x : \mathcal I(\tau)}{
  x : \tau \in \Gamma
}
\quad \infer{\Gamma \vdash M.x : \mathcal I(\tau)}{
  \Gamma \vdash M : \mt{sig} \; \overline{s} \; \mt{end}
  & \mt{proj}(M, \overline{s}, \mt{val} \; x) = \tau
}
\quad \infer{\Gamma \vdash X : \mathcal I(\tau)}{
  X : \tau \in \Gamma
}
\quad \infer{\Gamma \vdash M.X : \mathcal I(\tau)}{
  \Gamma \vdash M : \mt{sig} \; \overline{s} \; \mt{end}
  & \mt{proj}(M, \overline{s}, \mt{val} \; X) = \tau
}$$

$$\infer{\Gamma \vdash e_1 \; e_2 : \tau_2}{
  \Gamma \vdash e_1 : \tau_1 \to \tau_2
  & \Gamma \vdash e_2 : \tau_1
}
\quad \infer{\Gamma \vdash \lambda x : \tau_1 \Rightarrow e : \tau_1 \to \tau_2}{
  \Gamma, x : \tau_1 \vdash e : \tau_2
}$$

$$\infer{\Gamma \vdash e [c] : [x \mapsto c]\tau}{
  \Gamma \vdash e : x :: \kappa \to \tau
  & \Gamma \vdash c :: \kappa
}
\quad \infer{\Gamma \vdash \lambda [x \; ? \; \kappa] \Rightarrow e : x \; ? \; \kappa \to \tau}{
  \Gamma, x :: \kappa \vdash e : \tau
}$$

$$\infer{\Gamma \vdash e [\kappa] : [X \mapsto \kappa]\tau}{
  \Gamma \vdash e : X \longrightarrow \tau
  & \Gamma \vdash \kappa
}
\quad \infer{\Gamma \vdash X \Longrightarrow e : X \longrightarrow \tau}{
  \Gamma, X \vdash e : \tau
}$$

$$\infer{\Gamma \vdash \{\overline{c = e}\} : \{\overline{c : \tau}\}}{
  \forall i: \Gamma \vdash c_i :: \mt{Name}
  & \Gamma \vdash e_i : \tau_i
  & \forall i \neq j: \Gamma \vdash c_i \sim c_j
}
\quad \infer{\Gamma \vdash e.c : \tau}{
  \Gamma \vdash e : \$([c = \tau] \rc c')
}
\quad \infer{\Gamma \vdash e_1 \rc e_2 : \$(c_1 \rc c_2)}{
  \Gamma \vdash e_1 : \$c_1
  & \Gamma \vdash e_2 : \$c_2
  & \Gamma \vdash c_1 \sim c_2
}$$

$$\infer{\Gamma \vdash e \rcut c : \$c'}{
  \Gamma \vdash e : \$([c = \tau] \rc c')
}
\quad \infer{\Gamma \vdash e \rcutM c : \$c'}{
  \Gamma \vdash e : \$(c \rc c')
}$$

$$\infer{\Gamma \vdash \mt{let} \; \overline{ed} \; \mt{in} \; e \; \mt{end} : \tau}{
  \Gamma \vdash \overline{ed} \leadsto \Gamma'
  & \Gamma' \vdash e : \tau
}
\quad \infer{\Gamma \vdash \mt{case} \; e \; \mt{of} \; \overline{p \Rightarrow e} : \tau}{
  \Gamma \vdash e : \tau'
  & \forall i: \Gamma \vdash p_i \leadsto \Gamma_i; \tau'
  & \Gamma_i \vdash e_i : \tau
}$$

$$\infer{\Gamma \vdash \lambda [c_1 \sim c_2] \Rightarrow e : [c_1 \sim c_2] \Rightarrow \tau}{
  \Gamma \vdash c_1 :: \{\kappa\}
  & \Gamma \vdash c_2 :: \{\kappa'\}
  & \Gamma, c_1 \sim c_2 \vdash e : \tau
}
\quad \infer{\Gamma \vdash e \; ! : \tau}{
  \Gamma \vdash e : [c_1 \sim c_2] \Rightarrow \tau
  & \Gamma \vdash c_1 \sim c_2
}$$

\subsection{Pattern Typing}

$$\infer{\Gamma \vdash \_ \leadsto \Gamma; \tau}{}
\quad \infer{\Gamma \vdash x \leadsto \Gamma, x : \tau; \tau}{}
\quad \infer{\Gamma \vdash \ell \leadsto \Gamma; T(\ell)}{}$$

$$\infer{\Gamma \vdash X \leadsto \Gamma; \overline{[x_i \mapsto \tau'_i]}\tau}{
  X : \overline{x ::: \mt{Type}} \to \tau \in \Gamma
  & \textrm{$\tau$ not a function type}
}
\quad \infer{\Gamma \vdash X \; p \leadsto \Gamma'; \overline{[x_i \mapsto \tau'_i]}\tau}{
  X : \overline{x ::: \mt{Type}} \to \tau'' \to \tau \in \Gamma
  & \Gamma \vdash p \leadsto \Gamma'; \overline{[x_i \mapsto \tau'_i]}\tau''
}$$

$$\infer{\Gamma \vdash M.X \leadsto \Gamma; \overline{[x_i \mapsto \tau'_i]}\tau}{
  \Gamma \vdash M : \mt{sig} \; \overline{s} \; \mt{end}
  & \mt{proj}(M, \overline{s}, \mt{val} \; X) = \overline{x ::: \mt{Type}} \to \tau
  & \textrm{$\tau$ not a function type}
}$$

$$\infer{\Gamma \vdash M.X \; p \leadsto \Gamma'; \overline{[x_i \mapsto \tau'_i]}\tau}{
  \Gamma \vdash M : \mt{sig} \; \overline{s} \; \mt{end}
  & \mt{proj}(M, \overline{s}, \mt{val} \; X) = \overline{x ::: \mt{Type}} \to \tau'' \to \tau
  & \Gamma \vdash p \leadsto \Gamma'; \overline{[x_i \mapsto \tau'_i]}\tau''
}$$

$$\infer{\Gamma \vdash \{\overline{x = p}\} \leadsto \Gamma_n; \{\overline{x = \tau}\}}{
  \Gamma_0 = \Gamma
  & \forall i: \Gamma_i \vdash p_i \leadsto \Gamma_{i+1}; \tau_i
}
\quad \infer{\Gamma \vdash \{\overline{x = p}, \ldots\} \leadsto \Gamma_n; \$([\overline{x = \tau}] \rc c)}{
  \Gamma_0 = \Gamma
  & \forall i: \Gamma_i \vdash p_i \leadsto \Gamma_{i+1}; \tau_i
}$$

$$\infer{\Gamma \vdash p : \tau \leadsto \Gamma'; \tau}{
  \Gamma \vdash p \leadsto \Gamma'; \tau'
  & \Gamma \vdash \tau' \equiv \tau
}$$

\subsection{Declaration Typing}

We use an auxiliary judgment $\overline{y}; x; \Gamma \vdash \overline{dc} \leadsto \Gamma'$, expressing the enrichment of $\Gamma$ with the types of the datatype constructors $\overline{dc}$, when they are known to belong to datatype $x$ with type parameters $\overline{y}$.

We presuppose the existence of a function $\mathcal O$, where $\mathcal O(M, \overline{s})$ implements the $\mt{open}$ declaration by producing a context with the appropriate entry for each available component of module $M$ with signature items $\overline{s}$.  Where possible, $\mathcal O$ uses ``transparent'' entries (e.g., an abstract type $M.x$ is mapped to $x :: \mt{Type} = M.x$), so that the relationship with $M$ is maintained.  A related function $\mathcal O_c$ builds a context containing the disjointness constraints found in $\overline s$.
We write $\kappa_1^n \to \kappa$ as a shorthand, where $\kappa_1^0 \to \kappa = \kappa$ and $\kappa_1^{n+1} \to \kappa_2 = \kappa_1 \to (\kappa_1^n \to \kappa_2)$.  We write $\mt{len}(\overline{y})$ for the length of vector $\overline{y}$ of variables.

$$\infer{\Gamma \vdash \cdot \leadsto \Gamma}{}
\quad \infer{\Gamma \vdash d, \overline{d} \leadsto \Gamma''}{
  \Gamma \vdash d \leadsto \Gamma'
  & \Gamma' \vdash \overline{d} \leadsto \Gamma''
}$$

$$\infer{\Gamma \vdash \mt{con} \; x :: \kappa = c \leadsto \Gamma, x :: \kappa = c}{
  \Gamma \vdash c :: \kappa
}
\quad \infer{\Gamma \vdash \mt{datatype} \; x \; \overline{y} = \overline{dc} \leadsto \Gamma'}{
  \overline{y}; x; \Gamma, x :: \mt{Type}^{\mt{len}(\overline y)} \to \mt{Type} \vdash \overline{dc} \leadsto \Gamma'
}$$

$$\infer{\Gamma \vdash \mt{datatype} \; x = \mt{datatype} \; M.z \leadsto \Gamma'}{
  \Gamma \vdash M : \mt{sig} \; \overline{s} \; \mt{end}
  & \mt{proj}(M, \overline{s}, \mt{datatype} \; z) = (\overline{y}, \overline{dc})
  & \overline{y}; x; \Gamma, x :: \mt{Type}^{\mt{len}(\overline y)} \to \mt{Type} = M.z \vdash \overline{dc} \leadsto \Gamma'
}$$

$$\infer{\Gamma \vdash \mt{val} \; x : \tau = e \leadsto \Gamma, x : \tau}{
  \Gamma \vdash e : \tau
}$$

$$\infer{\Gamma \vdash \mt{val} \; \mt{rec} \; \overline{x : \tau = e} \leadsto \Gamma, \overline{x : \tau}}{
  \forall i: \Gamma, \overline{x : \tau} \vdash e_i : \tau_i
  & \textrm{$e_i$ starts with an expression $\lambda$, optionally preceded by constructor and disjointness $\lambda$s}
}$$

$$\infer{\Gamma \vdash \mt{structure} \; X : S = M \leadsto \Gamma, X : S}{
  \Gamma \vdash M : S
  & \textrm{ $M$ not a constant or application}
}
\quad \infer{\Gamma \vdash \mt{structure} \; X : S = M \leadsto \Gamma, X : \mt{selfify}(X, \overline{s})}{
  \Gamma \vdash M : \mt{sig} \; \overline{s} \; \mt{end}
}$$

$$\infer{\Gamma \vdash \mt{signature} \; X = S \leadsto \Gamma, X = S}{
  \Gamma \vdash S
}$$

$$\infer{\Gamma \vdash \mt{open} \; M \leadsto \Gamma, \mathcal O(M, \overline{s})}{
  \Gamma \vdash M : \mt{sig} \; \overline{s} \; \mt{end}
}$$

$$\infer{\Gamma \vdash \mt{constraint} \; c_1 \sim c_2 \leadsto \Gamma}{
  \Gamma \vdash c_1 :: \{\kappa\}
  & \Gamma \vdash c_2 :: \{\kappa\}
  & \Gamma \vdash c_1 \sim c_2
}
\quad \infer{\Gamma \vdash \mt{open} \; \mt{constraints} \; M \leadsto \Gamma, \mathcal O_c(M, \overline{s})}{
  \Gamma \vdash M : \mt{sig} \; \overline{s} \; \mt{end}
}$$

$$\infer{\Gamma \vdash \mt{table} \; x : c \leadsto \Gamma, x : \mt{Basis}.\mt{sql\_table} \; c \; []}{
  \Gamma \vdash c :: \{\mt{Type}\}
}
\quad \infer{\Gamma \vdash \mt{view} \; x = e \leadsto \Gamma, x : \mt{Basis}.\mt{sql\_view} \; c}{
  \Gamma \vdash e : \mt{Basis}.\mt{sql\_query} \; [] \; [] \; (\mt{map} \; (\lambda \_ \Rightarrow []) \; c') \; c
}$$

$$\infer{\Gamma \vdash \mt{sequence} \; x \leadsto \Gamma, x : \mt{Basis}.\mt{sql\_sequence}}{}$$

$$\infer{\Gamma \vdash \mt{cookie} \; x : \tau \leadsto \Gamma, x : \mt{Basis}.\mt{http\_cookie} \; \tau}{
  \Gamma \vdash \tau :: \mt{Type}
}
\quad \infer{\Gamma \vdash \mt{style} \; x \leadsto \Gamma, x : \mt{Basis}.\mt{css\_class}}{}$$

$$\infer{\Gamma \vdash \mt{task} \; e_1 = e_2 \leadsto \Gamma}{
  \Gamma \vdash e_1 : \mt{Basis}.\mt{task\_kind} \; \tau
  & \Gamma \vdash e_2 : \tau \to \mt{Basis}.\mt{transaction} \; \{\}
}$$

$$\infer{\overline{y}; x; \Gamma \vdash \cdot \leadsto \Gamma}{}
\quad \infer{\overline{y}; x; \Gamma \vdash X \mid \overline{dc} \leadsto \Gamma', X : \overline{y ::: \mt{Type}} \to x \; \overline{y}}{
  \overline{y}; x; \Gamma \vdash \overline{dc} \leadsto \Gamma'
}
\quad \infer{\overline{y}; x; \Gamma \vdash X \; \mt{of} \; \tau \mid \overline{dc} \leadsto \Gamma', X : \overline{y ::: \mt{Type}} \to \tau \to x \; \overline{y}}{
  \overline{y}; x; \Gamma \vdash \overline{dc} \leadsto \Gamma'
}$$

\subsection{Signature Item Typing}

We appeal to a signature item analogue of the $\mathcal O$ function from the last subsection.

This is the first judgment where we deal with constructor classes, for the $\mt{class}$ forms.  We will omit their special handling in this formal specification.  Section \ref{typeclasses} gives an informal description of how constructor classes influence type inference.

$$\infer{\Gamma \vdash \cdot \leadsto \Gamma}{}
\quad \infer{\Gamma \vdash s, \overline{s} \leadsto \Gamma''}{
  \Gamma \vdash s \leadsto \Gamma'
  & \Gamma' \vdash \overline{s} \leadsto \Gamma''
}$$

$$\infer{\Gamma \vdash \mt{con} \; x :: \kappa \leadsto \Gamma, x :: \kappa}{}
\quad \infer{\Gamma \vdash \mt{con} \; x :: \kappa = c \leadsto \Gamma, x :: \kappa = c}{
  \Gamma \vdash c :: \kappa
}
\quad \infer{\Gamma \vdash \mt{datatype} \; x \; \overline{y} = \overline{dc} \leadsto \Gamma'}{
  \overline{y}; x; \Gamma, x :: \mt{Type}^{\mt{len}(\overline y)} \to \mt{Type} \vdash \overline{dc} \leadsto \Gamma'
}$$

$$\infer{\Gamma \vdash \mt{datatype} \; x = \mt{datatype} \; M.z \leadsto \Gamma'}{
  \Gamma \vdash M : \mt{sig} \; \overline{s} \; \mt{end}
  & \mt{proj}(M, \overline{s}, \mt{datatype} \; z) = (\overline{y}, \overline{dc})
  & \overline{y}; x; \Gamma, x :: \mt{Type}^{\mt{len}(\overline y)} \to \mt{Type} = M.z \vdash \overline{dc} \leadsto \Gamma'
}$$

$$\infer{\Gamma \vdash \mt{val} \; x : \tau \leadsto \Gamma, x : \tau}{
  \Gamma \vdash \tau :: \mt{Type}
}$$

$$\infer{\Gamma \vdash \mt{structure} \; X : S \leadsto \Gamma, X : S}{
  \Gamma \vdash S
}
\quad \infer{\Gamma \vdash \mt{signature} \; X = S \leadsto \Gamma, X = S}{
  \Gamma \vdash S
}$$

$$\infer{\Gamma \vdash \mt{include} \; S \leadsto \Gamma, \mathcal O(\overline{s})}{
  \Gamma \vdash S
  & \Gamma \vdash S \equiv \mt{sig} \; \overline{s} \; \mt{end}
}$$

$$\infer{\Gamma \vdash \mt{constraint} \; c_1 \sim c_2 \leadsto \Gamma, c_1 \sim c_2}{
  \Gamma \vdash c_1 :: \{\kappa\}
  & \Gamma \vdash c_2 :: \{\kappa\}
}$$

$$\infer{\Gamma \vdash \mt{class} \; x :: \kappa = c \leadsto \Gamma, x :: \kappa = c}{
  \Gamma \vdash c :: \kappa
}
\quad \infer{\Gamma \vdash \mt{class} \; x :: \kappa \leadsto \Gamma, x :: \kappa}{}$$

\subsection{Signature Compatibility}

To simplify the judgments in this section, we assume that all signatures are alpha-varied as necessary to avoid including multiple bindings for the same identifier.  This is in addition to the usual alpha-variation of locally bound variables.

We rely on a judgment $\Gamma \vdash \overline{s} \leq s'$, which expresses the occurrence in signature items $\overline{s}$ of an item compatible with $s'$.  We also use a judgment $\Gamma \vdash \overline{dc} \leq \overline{dc'}$, which expresses compatibility of datatype constructor lists.

$$\infer{\Gamma \vdash S \equiv S}{}
\quad \infer{\Gamma \vdash S_1 \equiv S_2}{
  \Gamma \vdash S_2 \equiv S_1
}
\quad \infer{\Gamma \vdash X \equiv S}{
  X = S \in \Gamma
}
\quad \infer{\Gamma \vdash M.X \equiv S}{
  \Gamma \vdash M : \mt{sig} \; \overline{s} \; \mt{end}
  & \mt{proj}(M, \overline{s}, \mt{signature} \; X) = S
}$$

$$\infer{\Gamma \vdash S \; \mt{where} \; \mt{con} \; x = c \equiv \mt{sig} \; \overline{s^1} \; \mt{con} \; x :: \kappa = c \; \overline{s^2} \; \mt{end}}{
  \Gamma \vdash S \equiv \mt{sig} \; \overline{s^1} \; \mt{con} \; x :: \kappa \; \overline{s^2} \; \mt{end}
  & \Gamma \vdash c :: \kappa
}
\quad \infer{\Gamma \vdash \mt{sig} \; \overline{s^1} \; \mt{include} \; S \; \overline{s^2} \; \mt{end} \equiv \mt{sig} \; \overline{s^1} \; \overline{s} \; \overline{s^2} \; \mt{end}}{
  \Gamma \vdash S \equiv \mt{sig} \; \overline{s} \; \mt{end}
}$$

$$\infer{\Gamma \vdash S_1 \leq S_2}{
  \Gamma \vdash S_1 \equiv S_2
}
\quad \infer{\Gamma \vdash \mt{sig} \; \overline{s} \; \mt{end} \leq \mt{sig} \; \mt{end}}{}
\quad \infer{\Gamma \vdash \mt{sig} \; \overline{s} \; \mt{end} \leq \mt{sig} \; s' \; \overline{s'} \; \mt{end}}{
  \Gamma \vdash \overline{s} \leq s'
  & \Gamma \vdash s' \leadsto \Gamma'
  & \Gamma' \vdash \mt{sig} \; \overline{s} \; \mt{end} \leq \mt{sig} \; \overline{s'} \; \mt{end}
}$$

$$\infer{\Gamma \vdash s \; \overline{s} \leq s'}{
  \Gamma \vdash s \leq s'
}
\quad \infer{\Gamma \vdash s \; \overline{s} \leq s'}{
  \Gamma \vdash s \leadsto \Gamma'
  & \Gamma' \vdash \overline{s} \leq s'
}$$

$$\infer{\Gamma \vdash \mt{functor} (X : S_1) : S_2 \leq \mt{functor} (X : S'_1) : S'_2}{
  \Gamma \vdash S'_1 \leq S_1
  & \Gamma, X : S'_1 \vdash S_2 \leq S'_2
}$$

$$\infer{\Gamma \vdash \mt{con} \; x :: \kappa \leq \mt{con} \; x :: \kappa}{}
\quad \infer{\Gamma \vdash \mt{con} \; x :: \kappa = c \leq \mt{con} \; x :: \kappa}{}
\quad \infer{\Gamma \vdash \mt{datatype} \; x \; \overline{y} = \overline{dc} \leq \mt{con} \; x :: \mt{Type}^{\mt{len}(\overline y)} \to \mt{Type}}{}$$

$$\infer{\Gamma \vdash \mt{datatype} \; x = \mt{datatype} \; M.z \leq \mt{con} \; x :: \mt{Type}^{\mt{len}(\overline{y})} \to \mt{Type}}{
  \Gamma \vdash M : \mt{sig} \; \overline{s} \; \mt{end}
  & \mt{proj}(M, \overline{s}, \mt{datatype} \; z) = (\overline{y}, \overline{dc})
}$$

$$\infer{\Gamma \vdash \mt{class} \; x :: \kappa \leq \mt{con} \; x :: \kappa}{}
\quad \infer{\Gamma \vdash \mt{class} \; x :: \kappa = c \leq \mt{con} \; x :: \kappa}{}$$

$$\infer{\Gamma \vdash \mt{con} \; x :: \kappa = c_1 \leq \mt{con} \; x :: \kappa = c_2}{
  \Gamma \vdash c_1 \equiv c_2
}
\quad \infer{\Gamma \vdash \mt{class} \; x :: \kappa = c_1 \leq \mt{con} \; x :: \kappa = c_2}{
  \Gamma \vdash c_1 \equiv c_2
}$$

$$\infer{\Gamma \vdash \mt{datatype} \; x \; \overline{y} = \overline{dc} \leq \mt{datatype} \; x \; \overline{y} = \overline{dc'}}{
  \Gamma, \overline{y :: \mt{Type}} \vdash \overline{dc} \leq \overline{dc'}
}$$

$$\infer{\Gamma \vdash \mt{datatype} \; x = \mt{datatype} \; M.z \leq \mt{datatype} \; x \; \overline{y} = \overline{dc'}}{
  \Gamma \vdash M : \mt{sig} \; \overline{s} \; \mt{end}
  & \mt{proj}(M, \overline{s}, \mt{datatype} \; z) = (\overline{y}, \overline{dc})
  & \Gamma, \overline{y :: \mt{Type}} \vdash \overline{dc} \leq \overline{dc'}
}$$

$$\infer{\Gamma \vdash \cdot \leq \cdot}{}
\quad \infer{\Gamma \vdash X; \overline{dc} \leq X; \overline{dc'}}{
  \Gamma \vdash \overline{dc} \leq \overline{dc'}
}
\quad \infer{\Gamma \vdash X \; \mt{of} \; \tau_1; \overline{dc} \leq X \; \mt{of} \; \tau_2; \overline{dc'}}{
  \Gamma \vdash \tau_1 \equiv \tau_2
  & \Gamma \vdash \overline{dc} \leq \overline{dc'}
}$$

$$\infer{\Gamma \vdash \mt{datatype} \; x = \mt{datatype} \; M.z \leq \mt{datatype} \; x = \mt{datatype} \; M'.z'}{
  \Gamma \vdash M.z \equiv M'.z'
}$$

$$\infer{\Gamma \vdash \mt{val} \; x : \tau_1 \leq \mt{val} \; x : \tau_2}{
  \Gamma \vdash \tau_1 \equiv \tau_2
}
\quad \infer{\Gamma \vdash \mt{structure} \; X : S_1 \leq \mt{structure} \; X : S_2}{
  \Gamma \vdash S_1 \leq S_2
}
\quad \infer{\Gamma \vdash \mt{signature} \; X = S_1 \leq \mt{signature} \; X = S_2}{
  \Gamma \vdash S_1 \leq S_2
  & \Gamma \vdash S_2 \leq S_1
}$$

$$\infer{\Gamma \vdash \mt{constraint} \; c_1 \sim c_2 \leq \mt{constraint} \; c'_1 \sim c'_2}{
  \Gamma \vdash c_1 \equiv c'_1
  & \Gamma \vdash c_2 \equiv c'_2
}$$

$$\infer{\Gamma \vdash \mt{class} \; x :: \kappa \leq \mt{class} \; x :: \kappa}{}
\quad \infer{\Gamma \vdash \mt{class} \; x :: \kappa = c \leq \mt{class} \; x :: \kappa}{}
\quad \infer{\Gamma \vdash \mt{class} \; x :: \kappa = c_1 \leq \mt{class} \; x :: \kappa = c_2}{
  \Gamma \vdash c_1 \equiv c_2
}$$

$$\infer{\Gamma \vdash \mt{con} \; x :: \kappa \leq \mt{class} \; x :: \kappa}{}
\quad \infer{\Gamma \vdash \mt{con} \; x :: \kappa = c \leq \mt{class} \; x :: \kappa}{}
\quad \infer{\Gamma \vdash \mt{con} \; x :: \kappa = c_1 \leq \mt{class} \; x :: \kappa = c_2}{
  \Gamma \vdash c_1 \equiv c_2
}$$

\subsection{Module Typing}

We use a helper function $\mt{sigOf}$, which converts declarations and sequences of declarations into their principal signature items and sequences of signature items, respectively.

$$\infer{\Gamma \vdash M : S}{
  \Gamma \vdash M : S'
  & \Gamma \vdash S' \leq S
}
\quad \infer{\Gamma \vdash \mt{struct} \; \overline{d} \; \mt{end} : \mt{sig} \; \mt{sigOf}(\overline{d}) \; \mt{end}}{
  \Gamma \vdash \overline{d} \leadsto \Gamma'
}
\quad \infer{\Gamma \vdash X : S}{
  X : S \in \Gamma
}$$

$$\infer{\Gamma \vdash M.X : S}{
  \Gamma \vdash M : \mt{sig} \; \overline{s} \; \mt{end}
  & \mt{proj}(M, \overline{s}, \mt{structure} \; X) = S
}$$

$$\infer{\Gamma \vdash M_1(M_2) : [X \mapsto M_2]S_2}{
  \Gamma \vdash M_1 : \mt{functor}(X : S_1) : S_2
  & \Gamma \vdash M_2 : S_1
}
\quad \infer{\Gamma \vdash \mt{functor} (X : S_1) : S_2 = M : \mt{functor} (X : S_1) : S_2}{
  \Gamma \vdash S_1
  & \Gamma, X : S_1 \vdash S_2
  & \Gamma, X : S_1 \vdash M : S_2
}$$

\begin{eqnarray*}
  \mt{sigOf}(\cdot) &=& \cdot \\
  \mt{sigOf}(s \; \overline{s'}) &=& \mt{sigOf}(s) \; \mt{sigOf}(\overline{s'}) \\
  \\
  \mt{sigOf}(\mt{con} \; x :: \kappa = c) &=& \mt{con} \; x :: \kappa = c \\
  \mt{sigOf}(\mt{datatype} \; x \; \overline{y} = \overline{dc}) &=& \mt{datatype} \; x \; \overline{y} = \overline{dc} \\
  \mt{sigOf}(\mt{datatype} \; x = \mt{datatype} \; M.z) &=& \mt{datatype} \; x = \mt{datatype} \; M.z \\
  \mt{sigOf}(\mt{val} \; x : \tau = e) &=& \mt{val} \; x : \tau \\
  \mt{sigOf}(\mt{val} \; \mt{rec} \; \overline{x : \tau = e}) &=& \overline{\mt{val} \; x : \tau} \\
  \mt{sigOf}(\mt{structure} \; X : S = M) &=& \mt{structure} \; X : S \\
  \mt{sigOf}(\mt{signature} \; X = S) &=& \mt{signature} \; X = S \\
  \mt{sigOf}(\mt{open} \; M) &=& \mt{include} \; S \textrm{ (where $\Gamma \vdash M : S$)} \\
  \mt{sigOf}(\mt{constraint} \; c_1 \sim c_2) &=& \mt{constraint} \; c_1 \sim c_2 \\
  \mt{sigOf}(\mt{open} \; \mt{constraints} \; M) &=& \cdot \\
  \mt{sigOf}(\mt{table} \; x : c) &=& \mt{table} \; x : c \\
  \mt{sigOf}(\mt{view} \; x = e) &=& \mt{view} \; x : c \textrm{ (where $\Gamma \vdash e : \mt{Basis}.\mt{sql\_query} \; [] \; [] \; (\mt{map} \; (\lambda \_ \Rightarrow []) \; c') \; c$)} \\
  \mt{sigOf}(\mt{sequence} \; x) &=& \mt{sequence} \; x \\
  \mt{sigOf}(\mt{cookie} \; x : \tau) &=& \mt{cookie} \; x : \tau \\
  \mt{sigOf}(\mt{style} \; x) &=& \mt{style} \; x
\end{eqnarray*}
\begin{eqnarray*}
  \mt{selfify}(M, \cdot) &=& \cdot \\
  \mt{selfify}(M, s \; \overline{s'}) &=& \mt{selfify}(M, s) \; \mt{selfify}(M, \overline{s'}) \\
  \\
  \mt{selfify}(M, \mt{con} \; x :: \kappa) &=& \mt{con} \; x :: \kappa = M.x \\
  \mt{selfify}(M, \mt{con} \; x :: \kappa = c) &=& \mt{con} \; x :: \kappa = c \\
  \mt{selfify}(M, \mt{datatype} \; x \; \overline{y} = \overline{dc}) &=& \mt{datatype} \; x \; \overline{y} = \mt{datatype} \; M.x \\
  \mt{selfify}(M, \mt{datatype} \; x = \mt{datatype} \; M'.z) &=& \mt{datatype} \; x = \mt{datatype} \; M'.z \\
  \mt{selfify}(M, \mt{val} \; x : \tau) &=& \mt{val} \; x : \tau \\
  \mt{selfify}(M, \mt{structure} \; X : S) &=& \mt{structure} \; X : \mt{selfify}(M.X, \overline{s}) \textrm{ (where $\Gamma \vdash S \equiv \mt{sig} \; \overline{s} \; \mt{end}$)} \\
  \mt{selfify}(M, \mt{signature} \; X = S) &=& \mt{signature} \; X = S \\
  \mt{selfify}(M, \mt{include} \; S) &=& \mt{include} \; S \\
  \mt{selfify}(M, \mt{constraint} \; c_1 \sim c_2) &=& \mt{constraint} \; c_1 \sim c_2 \\
  \mt{selfify}(M, \mt{class} \; x :: \kappa) &=& \mt{class} \; x :: \kappa = M.x \\
  \mt{selfify}(M, \mt{class} \; x :: \kappa = c) &=& \mt{class} \; x :: \kappa = c \\
\end{eqnarray*}

\subsection{Module Projection}

\begin{eqnarray*}
  \mt{proj}(M, \mt{con} \; x :: \kappa \; \overline{s}, \mt{con} \; x) &=& \kappa \\
  \mt{proj}(M, \mt{con} \; x :: \kappa = c \; \overline{s}, \mt{con} \; x) &=& (\kappa, c) \\
  \mt{proj}(M, \mt{datatype} \; x \; \overline{y} = \overline{dc} \; \overline{s}, \mt{con} \; x) &=& \mt{Type}^{\mt{len}(\overline{y})} \to \mt{Type} \\
  \mt{proj}(M, \mt{datatype} \; x = \mt{datatype} \; M'.z \; \overline{s}, \mt{con} \; x) &=& (\mt{Type}^{\mt{len}(\overline{y})} \to \mt{Type}, M'.z) \textrm{ (where $\Gamma \vdash M' : \mt{sig} \; \overline{s'} \; \mt{end}$} \\
  && \textrm{and $\mt{proj}(M', \overline{s'}, \mt{datatype} \; z) = (\overline{y}, \overline{dc})$)} \\
  \mt{proj}(M, \mt{class} \; x :: \kappa \; \overline{s}, \mt{con} \; x) &=& \kappa \to \mt{Type} \\
  \mt{proj}(M, \mt{class} \; x :: \kappa = c \; \overline{s}, \mt{con} \; x) &=& (\kappa \to \mt{Type}, c) \\
  \\
  \mt{proj}(M, \mt{datatype} \; x \; \overline{y} = \overline{dc} \; \overline{s}, \mt{datatype} \; x) &=& (\overline{y}, \overline{dc}) \\
  \mt{proj}(M, \mt{datatype} \; x = \mt{datatype} \; M'.z \; \overline{s}, \mt{datatype} \; x) &=& \mt{proj}(M', \overline{s'}, \mt{datatype} \; z) \textrm{ (where $\Gamma \vdash M' : \mt{sig} \; \overline{s'} \; \mt{end}$)} \\
  \\
  \mt{proj}(M, \mt{val} \; x : \tau \; \overline{s}, \mt{val} \; x) &=& \tau \\
  \mt{proj}(M, \mt{datatype} \; x \; \overline{y} = \overline{dc} \; \overline{s}, \mt{val} \; X) &=& \overline{y ::: \mt{Type}} \to M.x \; \overline y \textrm{ (where $X \in \overline{dc}$)} \\
  \mt{proj}(M, \mt{datatype} \; x \; \overline{y} = \overline{dc} \; \overline{s}, \mt{val} \; X) &=& \overline{y ::: \mt{Type}} \to \tau \to M.x \; \overline y \textrm{ (where $X \; \mt{of} \; \tau \in \overline{dc}$)} \\
  \mt{proj}(M, \mt{datatype} \; x = \mt{datatype} \; M'.z \; \overline{s}, \mt{val} \; X) &=& \overline{y ::: \mt{Type}} \to M.x \; \overline y \textrm{ (where $\Gamma \vdash M' : \mt{sig} \; \overline{s'} \; \mt{end}$} \\
  && \textrm{and $\mt{proj}(M', \overline{s'}, \mt{datatype} \; z) = (\overline{y}, \overline{dc})$ and $X \in \overline{dc}$)} \\
  \mt{proj}(M, \mt{datatype} \; x = \mt{datatype} \; M'.z \; \overline{s}, \mt{val} \; X) &=& \overline{y ::: \mt{Type}} \to \tau \to M.x \; \overline y \textrm{ (where $\Gamma \vdash M' : \mt{sig} \; \overline{s'} \; \mt{end}$} \\
  && \textrm{and $\mt{proj}(M', \overline{s'}, \mt{datatype} \; z) = (\overline{y}, \overline{dc})$ and $X \; \mt{of} \; \tau \in \overline{dc}$)} \\
  \\
  \mt{proj}(M, \mt{structure} \; X : S \; \overline{s}, \mt{structure} \; X) &=& S \\
  \\
  \mt{proj}(M, \mt{signature} \; X = S \; \overline{s}, \mt{signature} \; X) &=& S \\
  \\
  \mt{proj}(M, \mt{con} \; x :: \kappa \; \overline{s}, V) &=& [x \mapsto M.x]\mt{proj}(M, \overline{s}, V) \\
  \mt{proj}(M, \mt{con} \; x :: \kappa = c \; \overline{s}, V) &=& [x \mapsto M.x]\mt{proj}(M, \overline{s}, V) \\
  \mt{proj}(M, \mt{datatype} \; x \; \overline{y} = \overline{dc} \; \overline{s}, V) &=& [x \mapsto M.x]\mt{proj}(M, \overline{s}, V) \\
  \mt{proj}(M, \mt{datatype} \; x = \mt{datatype} \; M'.z \; \overline{s}, V) &=& [x \mapsto M.x]\mt{proj}(M, \overline{s}, V) \\
  \mt{proj}(M, \mt{val} \; x : \tau \; \overline{s}, V) &=& \mt{proj}(M, \overline{s}, V) \\
  \mt{proj}(M, \mt{structure} \; X : S \; \overline{s}, V) &=& [X \mapsto M.X]\mt{proj}(M, \overline{s}, V) \\
  \mt{proj}(M, \mt{signature} \; X = S \; \overline{s}, V) &=& [X \mapsto M.X]\mt{proj}(M, \overline{s}, V) \\
  \mt{proj}(M, \mt{include} \; S \; \overline{s}, V) &=& \mt{proj}(M, \overline{s'} \; \overline{s}, V) \textrm{ (where $\Gamma \vdash S \equiv \mt{sig} \; \overline{s'} \; \mt{end}$)} \\
  \mt{proj}(M, \mt{constraint} \; c_1 \sim c_2 \; \overline{s}, V) &=& \mt{proj}(M, \overline{s}, V) \\
  \mt{proj}(M, \mt{class} \; x :: \kappa \; \overline{s}, V) &=& [x \mapsto M.x]\mt{proj}(M, \overline{s}, V) \\
  \mt{proj}(M, \mt{class} \; x :: \kappa = c \; \overline{s}, V) &=& [x \mapsto M.x]\mt{proj}(M, \overline{s}, V) \\
\end{eqnarray*}


\section{Type Inference}

The Ur/Web compiler uses \emph{heuristic type inference}, with no claims of completeness with respect to the declarative specification of the last section.  The rules in use seem to work well in practice.  This section summarizes those rules, to help Ur programmers predict what will work and what won't.

\subsection{Basic Unification}

Type-checkers for languages based on the Hindley-Milner type discipline, like ML and Haskell, take advantage of \emph{principal typing} properties, making complete type inference relatively straightforward.  Inference algorithms are traditionally implemented using type unification variables, at various points asserting equalities between types, in the process discovering the values of type variables.  The Ur/Web compiler uses the same basic strategy, but the complexity of the type system rules out easy completeness.

Type-checking can require evaluating recursive functional programs, thanks to the type-level $\mt{map}$ operator.  When a unification variable appears in such a type, the next step of computation can be undetermined.  The value of that variable might be determined later, but this would be ``too late'' for the unification problems generated at the first occurrence.  This is the essential source of incompleteness.

Nonetheless, the unification engine tends to do reasonably well.  Unlike in ML, polymorphism is never inferred in definitions; it must be indicated explicitly by writing out constructor-level parameters.  By writing these and other annotations, the programmer can generally get the type inference engine to do most of the type reconstruction work.
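As a small illustration (an invented definition, in standard Ur concrete syntax), a polymorphic function must bind its constructor-level parameter explicitly, though the \cd{:::} flag lets call sites omit the instantiation:

\begin{verbatim}
(* The parameter t must be written out; ::: marks it as
   implicit at call sites, so "id 0" infers t = int. *)
fun id [t ::: Type] (x : t) : t = x

val n : int = id 0
val s : string = id "hello"
\end{verbatim}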

\subsection{Unifying Record Types}

The type inference engine tries to take advantage of the algebraic rules governing type-level records, as shown in Section \ref{definitional}.  When two constructors of record kind are unified, they are reduced to normal forms, with like terms crossed off from each normal form until, hopefully, nothing remains.  In the presence of unification variables, this procedure cannot be complete.  The type-checker can help you understand what goes wrong when the process fails, as it outputs the unmatched remainders of the two normal forms.
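For example (with a hypothetical function \cd{getA}), unifying a partially known record type against a fully known one proceeds by crossing off like fields:

\begin{verbatim}
(* getA accepts any record with a field A of type t,
   plus disjoint extra fields r. *)
fun getA [t ::: Type] [r ::: {Type}] [[A] ~ r]
         (x : $([A = t] ++ r)) : t = x.A

(* Unifying $([A = t] ++ r) with $[A = int, B = string]
   crosses off A, leaving t = int and r = [B = string]. *)
val one : int = getA {A = 1, B = "two"}
\end{verbatim}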

\subsection{\label{typeclasses}Constructor Classes}

Ur includes a constructor class facility inspired by Haskell's.  The current version is experimental, with very general Prolog-like facilities that can lead to compile-time non-termination.

Constructor classes are integrated with the module system.  A constructor class of kind $\kappa$ is just a constructor of kind $\kappa$.  By marking such a constructor $c$ as a constructor class, the programmer instructs the type inference engine to, in each scope, record all values of types $c \; c_1 \; \ldots \; c_n$ as \emph{instances}.  Any function argument whose type is of such a form is treated as implicit, to be determined by examining the current instance database.  Any suitably kinded constructor within a module may be exposed as a constructor class from outside the module, simply by using a $\mt{class}$ signature item instead of a $\mt{con}$ signature item in the module's signature.

The ``dictionary encoding'' often used in Haskell implementations is made explicit in Ur.  Constructor class instances are just properly typed values, and they can also be considered as ``proofs'' of membership in the class.  In some cases, it is useful to pass these proofs around explicitly.  An underscore written where a proof is expected will also be inferred, if possible, from the current instance database.

Just as for constructors, constructor classes may be exported from modules, and they may be exported as concrete or abstract.  Concrete constructor classes have their ``real'' definitions exposed, so that client code may add new instances freely.  Automatic inference of concrete class instances will not generally work, so abstract classes are almost always the right choice.  They are useful as ``predicates'' that can be used to enforce invariants, as we will see in some definitions of SQL syntax in the Ur/Web standard library.  Free extension of a concrete class is easily supported by exporting a constructor function from a module, since the class implementation will be concrete within the module.
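To make this concrete, here is a sketch using the $\mt{show}$ class from $\mt{Basis}$; the \cd{describe} function and \cd{color} type are invented for illustration:

\begin{verbatim}
(* The witness argument "_ : show t" is filled in from the
   instance database at each call site. *)
fun describe [t ::: Type] (_ : show t) (x : t) : string =
  "value: " ^ show x

(* show is abstract, but Basis exports the constructor
   function mkShow, so clients may add instances freely. *)
datatype color = Red | Blue
val show_color : show color =
  mkShow (fn c => case c of Red => "red" | Blue => "blue")

val d : string = describe Blue
\end{verbatim}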

\subsection{Reverse-Engineering Record Types}

It's useful to write Ur functions and functors that take record constructors as inputs, but these constructors can grow quite long, even though their values are often implied by other arguments.  The compiler uses a simple heuristic to infer the values of unification variables that are mapped over, yielding known results.  If the result is empty, we're done; if it's not empty, we replace a single unification variable with a new constructor formed from three new unification variables, as in $[\alpha = \beta] \rc \gamma$.  This process can often be repeated to determine a unification variable fully.
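As a signature-level sketch (the function \cd{zeroes} is hypothetical), consider an argument type built with $\mt{map}$:

\begin{verbatim}
val zeroes : r ::: {Unit} -> $(map (fn _ => int) r) -> int

(* In a call like "zeroes {A = 1, B = 2}", the engine must
   unify $(map (fn _ => int) ?r) with $[A = int, B = int].
   It refines ?r to [?nm = ?v] ++ ?rest, peels off one field
   per step, and so determines r = [A = (), B = ()]. *)
\end{verbatim}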

\subsection{Implicit Arguments in Functor Applications}

Constructor, constraint, and constructor class witness members of structures may be omitted, when those structures are used in contexts where their assigned signatures imply how to fill in those missing members.  This feature combines well with reverse-engineering to allow for uses of complicated meta-programming functors with little more code than would be necessary to invoke an untyped, ad-hoc code generator.


\section{The Ur Standard Library}

The built-in parts of the Ur/Web standard library are described by the signature in \texttt{lib/basis.urs} in the distribution.  A module $\mt{Basis}$ ascribing to that signature is available in the initial environment, and every program is implicitly prefixed by $\mt{open} \; \mt{Basis}$.

Additionally, other common functions that are definable within Ur are included in \texttt{lib/top.urs} and \texttt{lib/top.ur}.  This $\mt{Top}$ module is also opened implicitly.

The idea behind Ur is to serve as the ideal host for embedded domain-specific languages.  For now, however, the ``generic'' functionality is intermixed with Ur/Web-specific functionality, including in these two library modules.  We hope that these generic library components have types that speak for themselves.  The next section introduces the Ur/Web-specific elements.  Here, we only give the type declarations from the beginning of $\mt{Basis}$.
$$\begin{array}{l}
  \mt{type} \; \mt{int} \\
  \mt{type} \; \mt{float} \\
  \mt{type} \; \mt{char} \\
  \mt{type} \; \mt{string} \\
  \mt{type} \; \mt{time} \\
  \mt{type} \; \mt{blob} \\
  \\
  \mt{type} \; \mt{unit} = \{\} \\
  \\
  \mt{datatype} \; \mt{bool} = \mt{False} \mid \mt{True} \\
  \\
  \mt{datatype} \; \mt{option} \; \mt{t} = \mt{None} \mid \mt{Some} \; \mt{of} \; \mt{t} \\
  \\
  \mt{datatype} \; \mt{list} \; \mt{t} = \mt{Nil} \mid \mt{Cons} \; \mt{of} \; \mt{t} \times \mt{list} \; \mt{t}
\end{array}$$

The only unusual element of this list is the $\mt{blob}$ type, which stands for binary sequences.  Simple blobs can be created from strings via $\mt{Basis.textBlob}$.  Blobs will also be generated from HTTP file uploads.

Ur also supports \emph{polymorphic variants}, a dual to extensible records that has been popularized by OCaml.  A type $\mt{variant} \; r$ represents an $n$-ary sum type, with one constructor for each field of record $r$.  Each constructor $c$ takes an argument of type $r.c$; the type $\{\}$ can be used to ``simulate'' a nullary constructor.  The \cd{make} function builds a variant value, while \cd{match} implements pattern-matching, with match cases represented as records of functions.
$$\begin{array}{l}
  \mt{con} \; \mt{variant} :: \{\mt{Type}\} \to \mt{Type} \\
  \mt{val} \; \mt{make} : \mt{nm} :: \mt{Name} \to \mt{t} ::: \mt{Type} \to \mt{ts} ::: \{\mt{Type}\} \to [[\mt{nm}] \sim \mt{ts}] \Rightarrow \mt{t} \to \mt{variant} \; ([\mt{nm} = \mt{t}] \rc \mt{ts}) \\
  \mt{val} \; \mt{match} : \mt{ts} ::: \{\mt{Type}\} \to \mt{t} ::: \mt{Type} \to \mt{variant} \; \mt{ts} \to \$(\mt{map} \; (\lambda \mt{t'} \Rightarrow \mt{t'} \to \mt{t}) \; \mt{ts}) \to \mt{t}
\end{array}$$
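For instance, the following invented declarations build and consume a two-constructor variant:

\begin{verbatim}
con sum = variant [A = int, B = {}]

val x : sum = make [#A] 3
val y : sum = make [#B] {}

(* match takes one function per constructor, as a record. *)
fun toInt (v : sum) : int =
  match v {A = fn n => n,
           B = fn _ => 0}
\end{verbatim}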

Another important generic Ur element comes at the beginning of \texttt{top.urs}.

$$\begin{array}{l}
  \mt{con} \; \mt{folder} :: \mt{K} \longrightarrow \{\mt{K}\} \to \mt{Type} \\
  \\
  \mt{val} \; \mt{fold} : \mt{K} \longrightarrow \mt{tf} :: (\{\mt{K}\} \to \mt{Type}) \\
  \hspace{.1in} \to (\mt{nm} :: \mt{Name} \to \mt{v} :: \mt{K} \to \mt{r} :: \{\mt{K}\} \to [[\mt{nm}] \sim \mt{r}] \Rightarrow \\
  \hspace{.2in} \mt{tf} \; \mt{r} \to \mt{tf} \; ([\mt{nm} = \mt{v}] \rc \mt{r})) \\
  \hspace{.1in} \to \mt{tf} \; [] \\
  \hspace{.1in} \to \mt{r} :: \{\mt{K}\} \to \mt{folder} \; \mt{r} \to \mt{tf} \; \mt{r}
\end{array}$$

For a type-level record $\mt{r}$, a $\mt{folder} \; \mt{r}$ encodes a permutation of $\mt{r}$'s elements.  The $\mt{fold}$ function can be called on a $\mt{folder}$ to iterate over the elements of $\mt{r}$ in that order.  $\mt{fold}$ is parameterized on a type-level function to be used to calculate the type of each intermediate result of folding.  After processing a subset $\mt{r'}$ of $\mt{r}$'s entries, the type of the accumulator should be $\mt{tf} \; \mt{r'}$.  The next two expression arguments to $\mt{fold}$ are the usual step function and initial accumulator, familiar from fold functions over lists.  The final two arguments are the record to fold over and a $\mt{folder}$ for it.

The Ur compiler treats $\mt{folder}$ like a constructor class, using built-in rules to infer $\mt{folder}$s for records with known structure.  The order in which field names are mentioned in source code is used as a hint about the permutation that the programmer would like.


\section{The Ur/Web Standard Library}

Some operations are only allowed in server-side code or only in client-side code.  The type system does not enforce such restrictions, but the compiler enforces them in the process of whole-program compilation.  In the discussion below, we note when a set of operations has a location restriction.

\subsection{Monads}

The Ur Basis defines the monad constructor class from Haskell.

$$\begin{array}{l}
  \mt{class} \; \mt{monad} :: \mt{Type} \to \mt{Type} \\
  \mt{val} \; \mt{return} : \mt{m} ::: (\mt{Type} \to \mt{Type}) \to \mt{t} ::: \mt{Type} \\
  \hspace{.1in} \to \mt{monad} \; \mt{m} \\
  \hspace{.1in} \to \mt{t} \to \mt{m} \; \mt{t} \\
  \mt{val} \; \mt{bind} : \mt{m} ::: (\mt{Type} \to \mt{Type}) \to \mt{t1} ::: \mt{Type} \to \mt{t2} ::: \mt{Type} \\
  \hspace{.1in} \to \mt{monad} \; \mt{m} \\
  \hspace{.1in} \to \mt{m} \; \mt{t1} \to (\mt{t1} \to \mt{m} \; \mt{t2}) \\
  \hspace{.1in} \to \mt{m} \; \mt{t2} \\
  \mt{val} \; \mt{mkMonad} : \mt{m} ::: (\mt{Type} \to \mt{Type}) \\
  \hspace{.1in} \to \{\mt{Return} : \mt{t} ::: \mt{Type} \to \mt{t} \to \mt{m} \; \mt{t}, \\
  \hspace{.3in} \mt{Bind} : \mt{t1} ::: \mt{Type} \to \mt{t2} ::: \mt{Type} \to \mt{m} \; \mt{t1} \to (\mt{t1} \to \mt{m} \; \mt{t2}) \to \mt{m} \; \mt{t2}\} \\
  \hspace{.1in} \to \mt{monad} \; \mt{m}
\end{array}$$

The Ur/Web compiler provides syntactic sugar for monads, similar to Haskell's \cd{do} notation.  An expression $x \leftarrow e_1; e_2$ is desugared to $\mt{bind} \; e_1 \; (\lambda x \Rightarrow e_2)$, and an expression $e_1; e_2$ is desugared to $\mt{bind} \; e_1 \; (\lambda () \Rightarrow e_2)$.  Note a difference from Haskell: since the desugaring of $e_1; e_2$ involves a function with $()$ as its formal argument, the type of $e_1$ must be of the form $m \; \{\}$, rather than an arbitrary $m \; t$.

\subsection{Transactions}

Ur is a pure language; we use Haskell's trick to support controlled side effects.  The standard library defines a monad $\mt{transaction}$, meant to stand for actions that may be undone cleanly.  By design, no other kinds of actions are supported.
$$\begin{array}{l}
  \mt{con} \; \mt{transaction} :: \mt{Type} \to \mt{Type} \\
  \mt{val} \; \mt{transaction\_monad} : \mt{monad} \; \mt{transaction}
\end{array}$$

For debugging purposes, a transactional function is provided for outputting a string on the server process' \texttt{stderr}.
$$\begin{array}{l}
  \mt{val} \; \mt{debug} : \mt{string} \to \mt{transaction} \; \mt{unit}
\end{array}$$
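To illustrate the monadic sugar in this setting, the following two hypothetical page handlers are equivalent: the first uses the sugar, and the second shows its desugaring into $\mt{bind}$.

\begin{verbatim}
fun greet () : transaction page =
    debug "Rendering greeting page";
    return <xml><body>Hello!</body></xml>

fun greet' () : transaction page =
    bind (debug "Rendering greeting page")
         (fn () => return <xml><body>Hello!</body></xml>)
\end{verbatim}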

\subsection{HTTP}

There are transactions for reading an HTTP header by name and for getting and setting strongly typed cookies.  Cookies may only be created by the $\mt{cookie}$ declaration form, ensuring that they be named consistently based on module structure.  For now, cookie operations are server-side only.
$$\begin{array}{l}
  \mt{con} \; \mt{http\_cookie} :: \mt{Type} \to \mt{Type} \\
  \mt{val} \; \mt{getCookie} : \mt{t} ::: \mt{Type} \to \mt{http\_cookie} \; \mt{t} \to \mt{transaction} \; (\mt{option} \; \mt{t}) \\
  \mt{val} \; \mt{setCookie} : \mt{t} ::: \mt{Type} \to \mt{http\_cookie} \; \mt{t} \to \{\mt{Value} : \mt{t}, \mt{Expires} : \mt{option} \; \mt{time}, \mt{Secure} : \mt{bool}\} \to \mt{transaction} \; \mt{unit} \\
  \mt{val} \; \mt{clearCookie} : \mt{t} ::: \mt{Type} \to \mt{http\_cookie} \; \mt{t} \to \mt{transaction} \; \mt{unit}
\end{array}$$
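Here is a sketch of cookie usage, where the cookie name and handler names are invented for the example:

\begin{verbatim}
(* A hypothetical cookie storing a user ID *)
cookie userId : int

fun logIn (id : int) : transaction unit =
    setCookie userId {Value = id,
                      Expires = None, (* expire with the session *)
                      Secure = False}

fun status () : transaction page =
    ido <- getCookie userId;
    case ido of
        None => return <xml><body>Not logged in</body></xml>
      | Some id => return <xml><body>Logged in as #{[id]}</body></xml>
\end{verbatim}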

There are also an abstract $\mt{url}$ type and functions for converting to it, based on the policy defined by \texttt{[allow|deny] url} directives in the project file.
$$\begin{array}{l}
  \mt{type} \; \mt{url} \\
  \mt{val} \; \mt{bless} : \mt{string} \to \mt{url} \\
  \mt{val} \; \mt{checkUrl} : \mt{string} \to \mt{option} \; \mt{url}
\end{array}$$
$\mt{bless}$ raises a runtime error if the string passed to it fails the URL policy.
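For example, here is a sketch of handling a user-supplied link target, preferring $\mt{checkUrl}$ so that a policy violation can be handled gracefully rather than aborting the request:

\begin{verbatim}
fun renderLink (s : string) : xbody =
    case checkUrl s of
        None => <xml>(link disallowed by URL policy)</xml>
      | Some u => <xml><a href={u}>external link</a></xml>
\end{verbatim}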

It is possible to grab the current page's URL or to build a URL for an arbitrary transaction that would also be an acceptable value of a \texttt{link} attribute of the \texttt{a} tag.  These are server-side operations.
$$\begin{array}{l}
  \mt{val} \; \mt{currentUrl} : \mt{transaction} \; \mt{url} \\
  \mt{val} \; \mt{url} : \mt{transaction} \; \mt{page} \to \mt{url}
\end{array}$$

Page generation may be interrupted at any time with a request to redirect to a particular URL instead.
$$\begin{array}{l}
  \mt{val} \; \mt{redirect} : \mt{t} ::: \mt{Type} \to \mt{url} \to \mt{transaction} \; \mt{t}
\end{array}$$
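A typical use is a guard that bounces unauthorized visitors elsewhere; the target string here is hypothetical and must satisfy the URL policy.

\begin{verbatim}
fun adminOnly (authorized : bool) : transaction page =
    if authorized then
        return <xml><body>Welcome, administrator.</body></xml>
    else
        redirect (bless "/login")
\end{verbatim}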

It is possible for pages to return files of arbitrary MIME types.  A file can be received from the user via this data type, together with the $\mt{upload}$ form tag.  These functions and those described in the following paragraph are server-side.
$$\begin{array}{l}
  \mt{type} \; \mt{file} \\
  \mt{val} \; \mt{fileName} : \mt{file} \to \mt{option} \; \mt{string} \\
  \mt{val} \; \mt{fileMimeType} : \mt{file} \to \mt{string} \\
  \mt{val} \; \mt{fileData} : \mt{file} \to \mt{blob}
\end{array}$$

It is also possible to get HTTP request headers and environment variables, and set HTTP response headers, using abstract types similar to the one for URLs.

$$\begin{array}{l}
  \mt{type} \; \mt{requestHeader} \\
  \mt{val} \; \mt{blessRequestHeader} : \mt{string} \to \mt{requestHeader} \\
  \mt{val} \; \mt{checkRequestHeader} : \mt{string} \to \mt{option} \; \mt{requestHeader} \\
  \mt{val} \; \mt{getHeader} : \mt{requestHeader} \to \mt{transaction} \; (\mt{option} \; \mt{string}) \\
  \\
  \mt{type} \; \mt{envVar} \\
  \mt{val} \; \mt{blessEnvVar} : \mt{string} \to \mt{envVar} \\
  \mt{val} \; \mt{checkEnvVar} : \mt{string} \to \mt{option} \; \mt{envVar} \\
  \mt{val} \; \mt{getenv} : \mt{envVar} \to \mt{transaction} \; (\mt{option} \; \mt{string}) \\
  \\
  \mt{type} \; \mt{responseHeader} \\
  \mt{val} \; \mt{blessResponseHeader} : \mt{string} \to \mt{responseHeader} \\
  \mt{val} \; \mt{checkResponseHeader} : \mt{string} \to \mt{option} \; \mt{responseHeader} \\
  \mt{val} \; \mt{setHeader} : \mt{responseHeader} \to \mt{string} \to \mt{transaction} \; \mt{unit}
\end{array}$$
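As a sketch, the following handler reads a request header and sets a response header; both names are assumed to be permitted by the project's corresponding policy directives.

\begin{verbatim}
fun handler () : transaction page =
    ua <- getHeader (blessRequestHeader "User-Agent");
    setHeader (blessResponseHeader "Cache-Control") "no-store";
    return <xml><body>
      Your browser: {[case ua of None => "unknown" | Some s => s]}
    </body></xml>
\end{verbatim}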

A blob can be extracted from a file and returned as the page result.  There are bless and check functions for MIME types analogous to those for URLs.
$$\begin{array}{l}
  \mt{type} \; \mt{mimeType} \\
  \mt{val} \; \mt{blessMime} : \mt{string} \to \mt{mimeType} \\
  \mt{val} \; \mt{checkMime} : \mt{string} \to \mt{option} \; \mt{mimeType} \\
  \mt{val} \; \mt{returnBlob} : \mt{t} ::: \mt{Type} \to \mt{blob} \to \mt{mimeType} \to \mt{transaction} \; \mt{t}
\end{array}$$
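Putting these pieces together, here is a sketch of a form that accepts an upload and a handler that echoes the file back, reusing its reported MIME type (assumed to pass the MIME policy); the field name \cd{F} is arbitrary.

\begin{verbatim}
fun echo (r : {F : file}) : transaction page =
    returnBlob (fileData r.F) (blessMime (fileMimeType r.F))

fun main () : transaction page = return <xml><body>
  <form>
    <upload{#F}/>
    <submit action={echo}/>
  </form>
</body></xml>
\end{verbatim}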

\subsection{SQL}

Everything about SQL database access is restricted to server-side code.

The fundamental unit of interest in the embedding of SQL is tables, described by a type family and creatable only via the $\mt{table}$ declaration form.
$$\begin{array}{l}
  \mt{con} \; \mt{sql\_table} :: \{\mt{Type}\} \to \{\{\mt{Unit}\}\} \to \mt{Type}
\end{array}$$
The first argument to this constructor gives the names and types of a table's columns, and the second argument gives the set of valid keys.  Keys are the only subsets of the columns that may be referenced as foreign keys.  Each key has a name.

We also have the simpler type family of SQL views, which have no keys.
$$\begin{array}{l}
  \mt{con} \; \mt{sql\_view} :: \{\mt{Type}\} \to \mt{Type}
\end{array}$$

A multi-parameter type class is used to allow tables and views to be used interchangeably, with a way of extracting the set of columns from each.
$$\begin{array}{l}
  \mt{class} \; \mt{fieldsOf} :: \mt{Type} \to \{\mt{Type}\} \to \mt{Type} \\
  \mt{val} \; \mt{fieldsOf\_table} : \mt{fs} ::: \{\mt{Type}\} \to \mt{keys} ::: \{\{\mt{Unit}\}\} \to \mt{fieldsOf} \; (\mt{sql\_table} \; \mt{fs} \; \mt{keys}) \; \mt{fs} \\
  \mt{val} \; \mt{fieldsOf\_view} : \mt{fs} ::: \{\mt{Type}\} \to \mt{fieldsOf} \; (\mt{sql\_view} \; \mt{fs}) \; \mt{fs}
\end{array}$$
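For reference, here are hypothetical declarations in the concrete syntax, yielding values of the two type families above:

\begin{verbatim}
table article : { Id : int, Title : string }
  PRIMARY KEY Id

view recent = SELECT article.Title AS Title
              FROM article
              WHERE article.Id > 100
\end{verbatim}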

\subsubsection{Table Constraints}

Tables may be declared with constraints, such that database modifications that violate the constraints are blocked.  A table may have at most one \texttt{PRIMARY KEY} constraint, which gives the subset of columns that will most often be used to look up individual rows in the table.

$$\begin{array}{l}
  \mt{con} \; \mt{primary\_key} :: \{\mt{Type}\} \to \{\{\mt{Unit}\}\} \to \mt{Type} \\
  \mt{val} \; \mt{no\_primary\_key} : \mt{fs} ::: \{\mt{Type}\} \to \mt{primary\_key} \; \mt{fs} \; [] \\
  \mt{val} \; \mt{primary\_key} : \mt{rest} ::: \{\mt{Type}\} \to \mt{t} ::: \mt{Type} \to \mt{key1} :: \mt{Name} \to \mt{keys} :: \{\mt{Type}\} \\
  \hspace{.1in} \to [[\mt{key1}] \sim \mt{keys}] \Rightarrow [[\mt{key1} = \mt{t}] \rc \mt{keys} \sim \mt{rest}] \\
  \hspace{.1in} \Rightarrow \$([\mt{key1} = \mt{sql\_injectable\_prim} \; \mt{t}] \rc \mt{map} \; \mt{sql\_injectable\_prim} \; \mt{keys}) \\
  \hspace{.1in} \to \mt{primary\_key} \; ([\mt{key1} = \mt{t}] \rc \mt{keys} \rc \mt{rest}) \; [\mt{Pkey} = [\mt{key1}] \rc \mt{map} \; (\lambda \_ \Rightarrow ()) \; \mt{keys}]
\end{array}$$
The type class $\mt{sql\_injectable\_prim}$ characterizes which types are allowed in SQL and are not $\mt{option}$ types.  In SQL, a \texttt{PRIMARY KEY} constraint enforces after-the-fact that a column may not contain \texttt{NULL}s, but Ur/Web forces that information to be included in table types from the beginning.  Thus, the only effect of this kind of constraint in Ur/Web is to enforce uniqueness of the given key within the table.

A type family stands for sets of named constraints of the remaining varieties.
$$\begin{array}{l}
  \mt{con} \; \mt{sql\_constraints} :: \{\mt{Type}\} \to \{\{\mt{Unit}\}\} \to \mt{Type}
\end{array}$$
The first argument gives the column types of the table being constrained, and the second argument maps constraint names to the keys that they define.  Constraints that don't define keys are mapped to ``empty keys.''

There is a type family of individual, unnamed constraints.
$$\begin{array}{l}
  \mt{con} \; \mt{sql\_constraint} :: \{\mt{Type}\} \to \{\mt{Unit}\} \to \mt{Type}
\end{array}$$
The first argument is the same as above, and the second argument gives the key columns for just this constraint.

We have operations for assembling constraints into constraint sets.
$$\begin{array}{l}
  \mt{val} \; \mt{no\_constraint} : \mt{fs} ::: \{\mt{Type}\} \to \mt{sql\_constraints} \; \mt{fs} \; [] \\
  \mt{val} \; \mt{one\_constraint} : \mt{fs} ::: \{\mt{Type}\} \to \mt{unique} ::: \{\mt{Unit}\} \to \mt{name} :: \mt{Name} \\
  \hspace{.1in} \to \mt{sql\_constraint} \; \mt{fs} \; \mt{unique} \to \mt{sql\_constraints} \; \mt{fs} \; [\mt{name} = \mt{unique}] \\
  \mt{val} \; \mt{join\_constraints} : \mt{fs} ::: \{\mt{Type}\} \to \mt{uniques1} ::: \{\{\mt{Unit}\}\} \to \mt{uniques2} ::: \{\{\mt{Unit}\}\} \to [\mt{uniques1} \sim \mt{uniques2}] \\
  \hspace{.1in} \Rightarrow \mt{sql\_constraints} \; \mt{fs} \; \mt{uniques1} \to \mt{sql\_constraints} \; \mt{fs} \; \mt{uniques2} \to \mt{sql\_constraints} \; \mt{fs} \; (\mt{uniques1} \rc \mt{uniques2})
\end{array}$$

A \texttt{UNIQUE} constraint forces a set of columns to be a key, which means that no combination of column values may occur more than once in the table.  The $\mt{unique1}$ and $\mt{unique}$ arguments are separated out only to ensure that empty \texttt{UNIQUE} constraints are rejected.
$$\begin{array}{l}
  \mt{val} \; \mt{unique} : \mt{rest} ::: \{\mt{Type}\} \to \mt{t} ::: \mt{Type} \to \mt{unique1} :: \mt{Name} \to \mt{unique} :: \{\mt{Type}\} \\
  \hspace{.1in} \to [[\mt{unique1}] \sim \mt{unique}] \Rightarrow [[\mt{unique1} = \mt{t}] \rc \mt{unique} \sim \mt{rest}] \\
  \hspace{.1in} \Rightarrow \mt{sql\_constraint} \; ([\mt{unique1} = \mt{t}] \rc \mt{unique} \rc \mt{rest}) \; ([\mt{unique1}] \rc \mt{map} \; (\lambda \_ \Rightarrow ()) \; \mt{unique})
\end{array}$$

A \texttt{FOREIGN KEY} constraint connects a set of local columns to a local or remote key, enforcing that the local columns always reference an existing row of the foreign key's table.  A local column of type $\mt{t}$ may be linked to a foreign column of type $\mt{option} \; \mt{t}$, and vice versa.  We formalize that notion with a type class.
$$\begin{array}{l}
  \mt{class} \; \mt{linkable} :: \mt{Type} \to \mt{Type} \to \mt{Type} \\
  \mt{val} \; \mt{linkable\_same} : \mt{t} ::: \mt{Type} \to \mt{linkable} \; \mt{t} \; \mt{t} \\
  \mt{val} \; \mt{linkable\_from\_nullable} : \mt{t} ::: \mt{Type} \to \mt{linkable} \; (\mt{option} \; \mt{t}) \; \mt{t} \\
  \mt{val} \; \mt{linkable\_to\_nullable} : \mt{t} ::: \mt{Type} \to \mt{linkable} \; \mt{t} \; (\mt{option} \; \mt{t})
\end{array}$$

The $\mt{matching}$ type family uses $\mt{linkable}$ to define when two keys match up type-wise.
$$\begin{array}{l}
  \mt{con} \; \mt{matching} :: \{\mt{Type}\} \to \{\mt{Type}\} \to \mt{Type} \\
  \mt{val} \; \mt{mat\_nil} : \mt{matching} \; [] \; [] \\
  \mt{val} \; \mt{mat\_cons} : \mt{t1} ::: \mt{Type} \to \mt{rest1} ::: \{\mt{Type}\} \to \mt{t2} ::: \mt{Type} \to \mt{rest2} ::: \{\mt{Type}\} \to \mt{nm1} :: \mt{Name} \to \mt{nm2} :: \mt{Name} \\
  \hspace{.1in} \to [[\mt{nm1}] \sim \mt{rest1}] \Rightarrow [[\mt{nm2}] \sim \mt{rest2}] \Rightarrow \mt{linkable} \; \mt{t1} \; \mt{t2} \to \mt{matching} \; \mt{rest1} \; \mt{rest2} \\
  \hspace{.1in} \to \mt{matching} \; ([\mt{nm1} = \mt{t1}] \rc \mt{rest1}) \; ([\mt{nm2} = \mt{t2}] \rc \mt{rest2})
\end{array}$$

SQL provides a number of different propagation modes for \texttt{FOREIGN KEY} constraints, governing what happens when a row containing a still-referenced foreign key value is deleted or modified to have a different key value.  The argument of a propagation mode's type gives the local key type.
$$\begin{array}{l}
  \mt{con} \; \mt{propagation\_mode} :: \{\mt{Type}\} \to \mt{Type} \\
  \mt{val} \; \mt{restrict} : \mt{fs} ::: \{\mt{Type}\} \to \mt{propagation\_mode} \; \mt{fs} \\
  \mt{val} \; \mt{cascade} : \mt{fs} ::: \{\mt{Type}\} \to \mt{propagation\_mode} \; \mt{fs} \\
  \mt{val} \; \mt{no\_action} : \mt{fs} ::: \{\mt{Type}\} \to \mt{propagation\_mode} \; \mt{fs} \\
  \mt{val} \; \mt{set\_null} : \mt{fs} ::: \{\mt{Type}\} \to \mt{propagation\_mode} \; (\mt{map} \; \mt{option} \; \mt{fs})
\end{array}$$

Finally, we put these ingredients together to define the \texttt{FOREIGN KEY} constraint function.
$$\begin{array}{l}
  \mt{val} \; \mt{foreign\_key} : \mt{mine1} ::: \mt{Name} \to \mt{t} ::: \mt{Type} \to \mt{mine} ::: \{\mt{Type}\} \to \mt{munused} ::: \{\mt{Type}\} \to \mt{foreign} ::: \{\mt{Type}\} \\
  \hspace{.1in} \to \mt{funused} ::: \{\mt{Type}\} \to \mt{nm} ::: \mt{Name} \to \mt{uniques} ::: \{\{\mt{Unit}\}\} \\
  \hspace{.1in} \to [[\mt{mine1}] \sim \mt{mine}] \Rightarrow [[\mt{mine1} = \mt{t}] \rc \mt{mine} \sim \mt{munused}] \Rightarrow [\mt{foreign} \sim \mt{funused}] \Rightarrow [[\mt{nm}] \sim \mt{uniques}] \\
  \hspace{.1in} \Rightarrow \mt{matching} \; ([\mt{mine1} = \mt{t}] \rc \mt{mine}) \; \mt{foreign} \\
  \hspace{.1in} \to \mt{sql\_table} \; (\mt{foreign} \rc \mt{funused}) \; ([\mt{nm} = \mt{map} \; (\lambda \_ \Rightarrow ()) \; \mt{foreign}] \rc \mt{uniques}) \\
  \hspace{.1in} \to \{\mt{OnDelete} : \mt{propagation\_mode} \; ([\mt{mine1} = \mt{t}] \rc \mt{mine}), \\
  \hspace{.2in} \mt{OnUpdate} : \mt{propagation\_mode} \; ([\mt{mine1} = \mt{t}] \rc \mt{mine})\} \\
  \hspace{.1in} \to \mt{sql\_constraint} \; ([\mt{mine1} = \mt{t}] \rc \mt{mine} \rc \mt{munused}) \; []
\end{array}$$

The last kind of constraint is a \texttt{CHECK} constraint, which imposes a boolean invariant on a row's contents.  It is defined using the $\mt{sql\_exp}$ type family, which we discuss in more detail below.
$$\begin{array}{l}
  \mt{val} \; \mt{check} : \mt{fs} ::: \{\mt{Type}\} \to \mt{sql\_exp} \; [] \; [] \; \mt{fs} \; \mt{bool} \to \mt{sql\_constraint} \; \mt{fs} \; []
\end{array}$$

Section \ref{tables} shows the expanded syntax of the $\mt{table}$ declaration and signature item that includes constraints.  There is no other way to use constraints with SQL in Ur/Web.
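As a sketch of how these combinators surface in the concrete syntax, consider two hypothetical tables exercising several constraint varieties:

\begin{verbatim}
table author : { Id : int, Nam : string }
  PRIMARY KEY Id,
  CONSTRAINT Nam UNIQUE Nam

table book : { Id : int, Author : int, Pages : int }
  PRIMARY KEY Id,
  CONSTRAINT Pages CHECK Pages > 0,
  CONSTRAINT Author FOREIGN KEY Author
    REFERENCES author (Id) ON DELETE CASCADE
\end{verbatim}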


\subsubsection{Queries}

A complete query is constructed via the $\mt{sql\_query}$ function.  Constructor arguments respectively specify the unrestricted free table variables (which will only be available in subqueries), the free table variables that may only be mentioned within arguments to aggregate functions, table fields we select (as records mapping tables to the subsets of their fields that we choose), and the (always named) extra expressions that we select.
$$\begin{array}{l}
  \mt{con} \; \mt{sql\_query} :: \{\{\mt{Type}\}\} \to \{\{\mt{Type}\}\} \to \{\{\mt{Type}\}\} \to \{\mt{Type}\} \to \mt{Type} \\
  \mt{val} \; \mt{sql\_query} : \mt{free} ::: \{\{\mt{Type}\}\} \\
  \hspace{.1in} \to \mt{afree} ::: \{\{\mt{Type}\}\} \\
  \hspace{.1in} \to \mt{tables} ::: \{\{\mt{Type}\}\} \\
  \hspace{.1in} \to \mt{selectedFields} ::: \{\{\mt{Type}\}\} \\
  \hspace{.1in} \to \mt{selectedExps} ::: \{\mt{Type}\} \\
  \hspace{.1in} \to [\mt{free} \sim \mt{tables}] \\
  \hspace{.1in} \Rightarrow \{\mt{Rows} : \mt{sql\_query1} \; \mt{free} \; \mt{afree} \; \mt{tables} \; \mt{selectedFields} \; \mt{selectedExps}, \\
  \hspace{.2in} \mt{OrderBy} : \mt{sql\_order\_by} \; (\mt{free} \rc \mt{tables}) \; \mt{selectedExps}, \\
  \hspace{.2in} \mt{Limit} : \mt{sql\_limit}, \\
  \hspace{.2in} \mt{Offset} : \mt{sql\_offset}\} \\
  \hspace{.1in} \to \mt{sql\_query} \; \mt{free} \; \mt{afree} \; \mt{selectedFields} \; \mt{selectedExps}
\end{array}$$

Queries are used by folding over their results inside transactions.
$$\begin{array}{l}
  \mt{val} \; \mt{query} : \mt{tables} ::: \{\{\mt{Type}\}\} \to \mt{exps} ::: \{\mt{Type}\} \to [\mt{tables} \sim \mt{exps}] \Rightarrow \mt{state} ::: \mt{Type} \to \mt{sql\_query} \; [] \; [] \; \mt{tables} \; \mt{exps} \\
  \hspace{.1in} \to (\$(\mt{exps} \rc \mt{map} \; (\lambda \mt{fields} :: \{\mt{Type}\} \Rightarrow \$\mt{fields}) \; \mt{tables}) \\
  \hspace{.2in} \to \mt{state} \to \mt{transaction} \; \mt{state}) \\
  \hspace{.1in} \to \mt{state} \to \mt{transaction} \; \mt{state}
\end{array}$$
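For instance, assuming a hypothetical table $\mt{t}$ with a single integer column, that column can be summed by folding over the rows:

\begin{verbatim}
table t : { A : int }

(* Each row value r has type {T : {A : int}}, a record mapping
   table names to records of selected fields. *)
fun total () : transaction int =
    query (SELECT * FROM t)
          (fn r acc => return (acc + r.T.A))
          0
\end{verbatim}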

Most of the complexity of the query encoding is in the type $\mt{sql\_query1}$, which includes simple queries and derived queries based on relational operators.  Constructor arguments respectively specify the unrestricted free table variables, the aggregate-only free table variables, the tables we select from, the subset of fields that we keep from each table for the result rows, and the extra expressions that we select.
$$\begin{array}{l}
  \mt{con} \; \mt{sql\_query1} :: \{\{\mt{Type}\}\} \to \{\{\mt{Type}\}\} \to \{\{\mt{Type}\}\} \to \{\{\mt{Type}\}\} \to \{\mt{Type}\} \to \mt{Type} \\
  \\
  \mt{type} \; \mt{sql\_relop} \\
  \mt{val} \; \mt{sql\_union} : \mt{sql\_relop} \\
  \mt{val} \; \mt{sql\_intersect} : \mt{sql\_relop} \\
  \mt{val} \; \mt{sql\_except} : \mt{sql\_relop} \\
  \mt{val} \; \mt{sql\_relop} : \mt{free} ::: \{\{\mt{Type}\}\} \\
  \hspace{.1in} \to \mt{afree} ::: \{\{\mt{Type}\}\} \\
  \hspace{.1in} \to \mt{tables1} ::: \{\{\mt{Type}\}\} \\
  \hspace{.1in} \to \mt{tables2} ::: \{\{\mt{Type}\}\} \\
  \hspace{.1in} \to \mt{selectedFields} ::: \{\{\mt{Type}\}\} \\
  \hspace{.1in} \to \mt{selectedExps} ::: \{\mt{Type}\} \\
  \hspace{.1in} \to \mt{sql\_relop} \\
  \hspace{.1in} \to \mt{bool} \; (* \; \mt{ALL} \; *) \\
  \hspace{.1in} \to \mt{sql\_query1} \; \mt{free} \; \mt{afree} \; \mt{tables1} \; \mt{selectedFields} \; \mt{selectedExps} \\
  \hspace{.1in} \to \mt{sql\_query1} \; \mt{free} \; \mt{afree} \; \mt{tables2} \; \mt{selectedFields} \; \mt{selectedExps} \\
  \hspace{.1in} \to \mt{sql\_query1} \; \mt{free} \; \mt{afree} \; \mt{selectedFields} \; \mt{selectedFields} \; \mt{selectedExps}
\end{array}$$
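In the concrete syntax, a relational operator combines two queries that select the same fields, as the type above requires.  A hedged sketch, with invented table names:

\begin{verbatim}
table a : { X : int }
table b : { X : int }

fun both () : transaction (list int) =
    query ((SELECT a.X AS X FROM a)
             UNION (SELECT b.X AS X FROM b))
          (fn r acc => return (r.X :: acc))
          []
\end{verbatim}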

$$\begin{array}{l}
  \mt{val} \; \mt{sql\_query1} : \mt{free} ::: \{\{\mt{Type}\}\} \\
  \hspace{.1in} \to \mt{afree} ::: \{\{\mt{Type}\}\} \\
  \hspace{.1in} \to \mt{tables} ::: \{\{\mt{Type}\}\} \\
  \hspace{.1in} \to \mt{grouped} ::: \{\{\mt{Type}\}\} \\
  \hspace{.1in} \to \mt{selectedFields} ::: \{\{\mt{Type}\}\} \\
  \hspace{.1in} \to \mt{selectedExps} ::: \{\mt{Type}\} \\
  \hspace{.1in} \to \mt{empties} :: \{\mt{Unit}\} \\
  \hspace{.1in} \to [\mt{free} \sim \mt{tables}] \\
  \hspace{.1in} \Rightarrow [\mt{free} \sim \mt{grouped}] \\
  \hspace{.1in} \Rightarrow [\mt{afree} \sim \mt{tables}] \\
  \hspace{.1in} \Rightarrow [\mt{empties} \sim \mt{selectedFields}] \\
  \hspace{.1in} \Rightarrow \{\mt{Distinct} : \mt{bool}, \\
  \hspace{.2in} \mt{From} : \mt{sql\_from\_items} \; \mt{free} \; \mt{tables}, \\
  \hspace{.2in} \mt{Where} : \mt{sql\_exp} \; (\mt{free} \rc \mt{tables}) \; \mt{afree} \; [] \; \mt{bool}, \\
  \hspace{.2in} \mt{GroupBy} : \mt{sql\_subset} \; \mt{tables} \; \mt{grouped}, \\
  \hspace{.2in} \mt{Having} : \mt{sql\_exp} \; (\mt{free} \rc \mt{grouped}) \; (\mt{afree} \rc \mt{tables}) \; [] \; \mt{bool}, \\
  \hspace{.2in} \mt{SelectFields} : \mt{sql\_subset} \; \mt{grouped} \; (\mt{map} \; (\lambda \_ \Rightarrow []) \; \mt{empties} \rc \mt{selectedFields}), \\
  \hspace{.2in} \mt {SelectExps} : \$(\mt{map} \; (\mt{sql\_expw} \; (\mt{free} \rc \mt{grouped}) \; (\mt{afree} \rc \mt{tables}) \; []) \; \mt{selectedExps}) \} \\
  \hspace{.1in} \to \mt{sql\_query1} \; \mt{free} \; \mt{afree} \; \mt{tables} \; \mt{selectedFields} \; \mt{selectedExps}
\end{array}$$

To encode projection of subsets of fields in $\mt{SELECT}$ clauses, and to encode $\mt{GROUP} \; \mt{BY}$ clauses, we rely on a type family $\mt{sql\_subset}$, capturing what it means for one record of table fields to be a subset of another.  The main constructor $\mt{sql\_subset}$ ``proves subset facts'' by requiring a split of a record into kept and dropped parts.  The extra constructor $\mt{sql\_subset\_all}$ is a convenience for keeping all fields of a record.
$$\begin{array}{l}
  \mt{con} \; \mt{sql\_subset} :: \{\{\mt{Type}\}\} \to \{\{\mt{Type}\}\} \to \mt{Type} \\
  \mt{val} \; \mt{sql\_subset} : \mt{keep\_drop} :: \{(\{\mt{Type}\} \times \{\mt{Type}\})\} \\
  \hspace{.1in} \to \mt{sql\_subset} \\
  \hspace{.2in} (\mt{map} \; (\lambda \mt{fields} :: (\{\mt{Type}\} \times \{\mt{Type}\}) \Rightarrow \mt{fields}.1 \rc \mt{fields}.2)\; \mt{keep\_drop}) \\
  \hspace{.2in} (\mt{map} \; (\lambda \mt{fields} :: (\{\mt{Type}\} \times \{\mt{Type}\}) \Rightarrow \mt{fields}.1) \; \mt{keep\_drop}) \\
  \mt{val} \; \mt{sql\_subset\_all} : \mt{tables} :: \{\{\mt{Type}\}\} \to \mt{sql\_subset} \; \mt{tables} \; \mt{tables}
\end{array}$$

SQL expressions are used in several places, including $\mt{SELECT}$, $\mt{WHERE}$, $\mt{HAVING}$, and $\mt{ORDER} \; \mt{BY}$ clauses.  They reify a fragment of the standard SQL expression language, while making it possible to inject ``native'' Ur values in some places.  The arguments to the $\mt{sql\_exp}$ type family respectively give the unrestricted-availability table fields, the table fields that may only be used in arguments to aggregate functions, the available selected expressions, and the type of the expression.
$$\begin{array}{l}
  \mt{con} \; \mt{sql\_exp} :: \{\{\mt{Type}\}\} \to \{\{\mt{Type}\}\} \to \{\mt{Type}\} \to \mt{Type} \to \mt{Type}
\end{array}$$

Any field in scope may be converted to an expression.
$$\begin{array}{l}
  \mt{val} \; \mt{sql\_field} : \mt{otherTabs} ::: \{\{\mt{Type}\}\} \to \mt{otherFields} ::: \{\mt{Type}\} \\
  \hspace{.1in} \to \mt{fieldType} ::: \mt{Type} \to \mt{agg} ::: \{\{\mt{Type}\}\} \\
  \hspace{.1in} \to \mt{exps} ::: \{\mt{Type}\} \\
  \hspace{.1in} \to \mt{tab} :: \mt{Name} \to \mt{field} :: \mt{Name} \\
  \hspace{.1in} \to \mt{sql\_exp} \; ([\mt{tab} = [\mt{field} = \mt{fieldType}] \rc \mt{otherFields}] \rc \mt{otherTabs}) \; \mt{agg} \; \mt{exps} \; \mt{fieldType}
\end{array}$$

There is an analogous function for referencing named expressions.
$$\begin{array}{l}
  \mt{val} \; \mt{sql\_exp} : \mt{tabs} ::: \{\{\mt{Type}\}\} \to \mt{agg} ::: \{\{\mt{Type}\}\} \to \mt{t} ::: \mt{Type} \to \mt{rest} ::: \{\mt{Type}\} \to \mt{nm} :: \mt{Name} \\
  \hspace{.1in} \to \mt{sql\_exp} \; \mt{tabs} \; \mt{agg} \; ([\mt{nm} = \mt{t}] \rc \mt{rest}) \; \mt{t}
\end{array}$$

Ur values of appropriate types may be injected into SQL expressions.
$$\begin{array}{l}
  \mt{class} \; \mt{sql\_injectable\_prim} \\
  \mt{val} \; \mt{sql\_bool} : \mt{sql\_injectable\_prim} \; \mt{bool} \\
  \mt{val} \; \mt{sql\_int} : \mt{sql\_injectable\_prim} \; \mt{int} \\
  \mt{val} \; \mt{sql\_float} : \mt{sql\_injectable\_prim} \; \mt{float} \\
  \mt{val} \; \mt{sql\_string} : \mt{sql\_injectable\_prim} \; \mt{string} \\
  \mt{val} \; \mt{sql\_time} : \mt{sql\_injectable\_prim} \; \mt{time} \\
  \mt{val} \; \mt{sql\_blob} : \mt{sql\_injectable\_prim} \; \mt{blob} \\
  \mt{val} \; \mt{sql\_channel} : \mt{t} ::: \mt{Type} \to \mt{sql\_injectable\_prim} \; (\mt{channel} \; \mt{t}) \\
  \mt{val} \; \mt{sql\_client} : \mt{sql\_injectable\_prim} \; \mt{client} \\
  \\
  \mt{class} \; \mt{sql\_injectable} \\
  \mt{val} \; \mt{sql\_prim} : \mt{t} ::: \mt{Type} \to \mt{sql\_injectable\_prim} \; \mt{t} \to \mt{sql\_injectable} \; \mt{t} \\
  \mt{val} \; \mt{sql\_option\_prim} : \mt{t} ::: \mt{Type} \to \mt{sql\_injectable\_prim} \; \mt{t} \to \mt{sql\_injectable} \; (\mt{option} \; \mt{t}) \\
  \\
  \mt{val} \; \mt{sql\_inject} : \mt{tables} ::: \{\{\mt{Type}\}\} \to \mt{agg} ::: \{\{\mt{Type}\}\} \to \mt{exps} ::: \{\mt{Type}\} \to \mt{t} ::: \mt{Type} \to \mt{sql\_injectable} \; \mt{t} \\
  \hspace{.1in} \to \mt{t} \to \mt{sql\_exp} \; \mt{tables} \; \mt{agg} \; \mt{exps} \; \mt{t}
\end{array}$$

Additionally, most function-free types may be injected safely, via the $\mt{serialized}$ type family.
$$\begin{array}{l}
  \mt{con} \; \mt{serialized} :: \mt{Type} \to \mt{Type} \\
  \mt{val} \; \mt{serialize} : \mt{t} ::: \mt{Type} \to \mt{t} \to \mt{serialized} \; \mt{t} \\
  \mt{val} \; \mt{deserialize} : \mt{t} ::: \mt{Type} \to \mt{serialized} \; \mt{t} \to \mt{t} \\
  \mt{val} \; \mt{sql\_serialized} : \mt{t} ::: \mt{Type} \to \mt{sql\_injectable\_prim} \; (\mt{serialized} \; \mt{t})
\end{array}$$
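For example, a datatype with no SQL counterpart may be stored in a column of a $\mt{serialized}$ type; the names here are invented for the example.

\begin{verbatim}
datatype color = Red | Green | Blue

(* sql_serialized makes this column type injectable. *)
table prefs : { Id : int, Favorite : serialized color }

(* Round trip: deserialize (serialize c) evaluates to c. *)
fun roundTrip (c : color) : color =
    deserialize (serialize c)
\end{verbatim}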

We have the SQL nullness test, which is necessary because of the strange SQL semantics of equality in the presence of null values.
$$\begin{array}{l}
  \mt{val} \; \mt{sql\_is\_null} : \mt{tables} ::: \{\{\mt{Type}\}\} \to \mt{agg} ::: \{\{\mt{Type}\}\} \to \mt{exps} ::: \{\mt{Type}\} \to \mt{t} ::: \mt{Type} \\
  \hspace{.1in} \to \mt{sql\_exp} \; \mt{tables} \; \mt{agg} \; \mt{exps} \; (\mt{option} \; \mt{t}) \to \mt{sql\_exp} \; \mt{tables} \; \mt{agg} \; \mt{exps} \; \mt{bool}
\end{array}$$

As another way of dealing with null values, there is also a restricted form of the standard \cd{COALESCE} function.
$$\begin{array}{l}
  \mt{val} \; \mt{sql\_coalesce} : \mt{tables} ::: \{\{\mt{Type}\}\} \to \mt{agg} ::: \{\{\mt{Type}\}\} \to \mt{exps} ::: \{\mt{Type}\} \\
  \hspace{.1in} \to \mt{t} ::: \mt{Type} \\
  \hspace{.1in} \to \mt{sql\_exp} \; \mt{tables} \; \mt{agg} \; \mt{exps} \; (\mt{option} \; \mt{t}) \\
  \hspace{.1in} \to \mt{sql\_exp} \; \mt{tables} \; \mt{agg} \; \mt{exps} \; \mt{t} \\
  \hspace{.1in} \to \mt{sql\_exp} \; \mt{tables} \; \mt{agg} \; \mt{exps} \; \mt{t}
\end{array}$$
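A sketch of \cd{COALESCE} in the concrete query syntax, preferring a nullable nickname and falling back to a mandatory name (table and field names invented):

\begin{verbatim}
table person : { Nam : string, Nickname : option string }

fun displayNames () : transaction (list string) =
    query (SELECT COALESCE(person.Nickname, person.Nam) AS N
           FROM person)
          (fn r acc => return (r.N :: acc))
          []
\end{verbatim}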

We have generic nullary, unary, and binary operators.
$$\begin{array}{l}
  \mt{con} \; \mt{sql\_nfunc} :: \mt{Type} \to \mt{Type} \\
  \mt{val} \; \mt{sql\_current\_timestamp} : \mt{sql\_nfunc} \; \mt{time} \\
  \mt{val} \; \mt{sql\_nfunc} : \mt{tables} ::: \{\{\mt{Type}\}\} \to \mt{agg} ::: \{\{\mt{Type}\}\} \to \mt{exps} ::: \{\mt{Type}\} \to \mt{t} ::: \mt{Type} \\
  \hspace{.1in} \to \mt{sql\_nfunc} \; \mt{t} \to \mt{sql\_exp} \; \mt{tables} \; \mt{agg} \; \mt{exps} \; \mt{t}
\end{array}$$

$$\begin{array}{l}
  \mt{con} \; \mt{sql\_unary} :: \mt{Type} \to \mt{Type} \to \mt{Type} \\
  \mt{val} \; \mt{sql\_not} : \mt{sql\_unary} \; \mt{bool} \; \mt{bool} \\
  \mt{val} \; \mt{sql\_unary} : \mt{tables} ::: \{\{\mt{Type}\}\} \to \mt{agg} ::: \{\{\mt{Type}\}\} \to \mt{exps} ::: \{\mt{Type}\} \to \mt{arg} ::: \mt{Type} \to \mt{res} ::: \mt{Type} \\
  \hspace{.1in} \to \mt{sql\_unary} \; \mt{arg} \; \mt{res} \to \mt{sql\_exp} \; \mt{tables} \; \mt{agg} \; \mt{exps} \; \mt{arg} \to \mt{sql\_exp} \; \mt{tables} \; \mt{agg} \; \mt{exps} \; \mt{res} \\
\end{array}$$

$$\begin{array}{l}
  \mt{con} \; \mt{sql\_binary} :: \mt{Type} \to \mt{Type} \to \mt{Type} \to \mt{Type} \\
  \mt{val} \; \mt{sql\_and} : \mt{sql\_binary} \; \mt{bool} \; \mt{bool} \; \mt{bool} \\
  \mt{val} \; \mt{sql\_or} : \mt{sql\_binary} \; \mt{bool} \; \mt{bool} \; \mt{bool} \\
  \mt{val} \; \mt{sql\_binary} : \mt{tables} ::: \{\{\mt{Type}\}\} \to \mt{agg} ::: \{\{\mt{Type}\}\} \to \mt{exps} ::: \{\mt{Type}\} \to \mt{arg_1} ::: \mt{Type} \to \mt{arg_2} ::: \mt{Type} \to \mt{res} ::: \mt{Type} \\
  \hspace{.1in} \to \mt{sql\_binary} \; \mt{arg_1} \; \mt{arg_2} \; \mt{res} \to \mt{sql\_exp} \; \mt{tables} \; \mt{agg} \; \mt{exps} \; \mt{arg_1} \to \mt{sql\_exp} \; \mt{tables} \; \mt{agg} \; \mt{exps} \; \mt{arg_2} \to \mt{sql\_exp} \; \mt{tables} \; \mt{agg} \; \mt{exps} \; \mt{res}
\end{array}$$

$$\begin{array}{l}
  \mt{class} \; \mt{sql\_arith} \\
  \mt{val} \; \mt{sql\_int\_arith} : \mt{sql\_arith} \; \mt{int} \\
  \mt{val} \; \mt{sql\_float\_arith} : \mt{sql\_arith} \; \mt{float} \\
  \mt{val} \; \mt{sql\_neg} : \mt{t} ::: \mt{Type} \to \mt{sql\_arith} \; \mt{t} \to \mt{sql\_unary} \; \mt{t} \; \mt{t} \\
  \mt{val} \; \mt{sql\_plus} : \mt{t} ::: \mt{Type} \to \mt{sql\_arith} \; \mt{t} \to \mt{sql\_binary} \; \mt{t} \; \mt{t} \; \mt{t} \\
  \mt{val} \; \mt{sql\_minus} : \mt{t} ::: \mt{Type} \to \mt{sql\_arith} \; \mt{t} \to \mt{sql\_binary} \; \mt{t} \; \mt{t} \; \mt{t} \\
  \mt{val} \; \mt{sql\_times} : \mt{t} ::: \mt{Type} \to \mt{sql\_arith} \; \mt{t} \to \mt{sql\_binary} \; \mt{t} \; \mt{t} \; \mt{t} \\
  \mt{val} \; \mt{sql\_div} : \mt{t} ::: \mt{Type} \to \mt{sql\_arith} \; \mt{t} \to \mt{sql\_binary} \; \mt{t} \; \mt{t} \; \mt{t} \\
  \mt{val} \; \mt{sql\_mod} : \mt{sql\_binary} \; \mt{int} \; \mt{int} \; \mt{int}
\end{array}$$

Finally, we have aggregate functions.  The $\mt{COUNT(\ast)}$ syntax is handled specially, since it takes no real argument.  The other aggregate functions are placed into a general type family, using constructor classes to restrict usage to properly typed arguments.  The key aspect of the $\mt{sql\_aggregate}$ function's type is the shift of aggregate-function-only fields into unrestricted fields.
$$\begin{array}{l}
  \mt{val} \; \mt{sql\_count} : \mt{tables} ::: \{\{\mt{Type}\}\} \to \mt{agg} ::: \{\{\mt{Type}\}\} \to \mt{exps} ::: \{\mt{Type}\} \to \mt{sql\_exp} \; \mt{tables} \; \mt{agg} \; \mt{exps} \; \mt{int}
\end{array}$$

$$\begin{array}{l}
  \mt{con} \; \mt{sql\_aggregate} :: \mt{Type} \to \mt{Type} \to \mt{Type} \\
  \mt{val} \; \mt{sql\_aggregate} : \mt{tables} ::: \{\{\mt{Type}\}\} \to \mt{agg} ::: \{\{\mt{Type}\}\} \to \mt{exps} ::: \{\mt{Type}\} \to \mt{dom} ::: \mt{Type} \to \mt{ran} ::: \mt{Type} \\
  \hspace{.1in} \to \mt{sql\_aggregate} \; \mt{dom} \; \mt{ran} \to \mt{sql\_exp} \; \mt{agg} \; \mt{agg} \; \mt{exps} \; \mt{dom} \to \mt{sql\_exp} \; \mt{tables} \; \mt{agg} \; \mt{exps} \; \mt{ran}
\end{array}$$

$$\begin{array}{l}
  \mt{val} \; \mt{sql\_count\_col} : \mt{t} ::: \mt{Type} \to \mt{sql\_aggregate} \; (\mt{option} \; \mt{t}) \; \mt{int}
\end{array}$$

Most aggregate functions are typed using a two-parameter constructor class $\mt{nullify}$ which maps $\mt{option}$ types to themselves and adds $\mt{option}$ to others.  That is, this constructor class represents the process of making an SQL type ``nullable.''
 
$$\begin{array}{l}
  \mt{class} \; \mt{sql\_summable} \\
  \mt{val} \; \mt{sql\_summable\_int} : \mt{sql\_summable} \; \mt{int} \\
  \mt{val} \; \mt{sql\_summable\_float} : \mt{sql\_summable} \; \mt{float} \\
  \mt{val} \; \mt{sql\_avg} : \mt{t} ::: \mt{Type} \to \mt{sql\_summable} \; \mt{t} \to \mt{sql\_aggregate} \; \mt{t} \; (\mt{option} \; \mt{float}) \\
  \mt{val} \; \mt{sql\_sum} : \mt{t} ::: \mt{Type} \to \mt{nt} ::: \mt{Type} \to \mt{sql\_summable} \; \mt{t} \to \mt{nullify} \; \mt{t} \; \mt{nt} \to \mt{sql\_aggregate} \; \mt{t} \; \mt{nt}
\end{array}$$

$$\begin{array}{l}
  \mt{class} \; \mt{sql\_maxable} \\
  \mt{val} \; \mt{sql\_maxable\_int} : \mt{sql\_maxable} \; \mt{int} \\
  \mt{val} \; \mt{sql\_maxable\_float} : \mt{sql\_maxable} \; \mt{float} \\
  \mt{val} \; \mt{sql\_maxable\_string} : \mt{sql\_maxable} \; \mt{string} \\
  \mt{val} \; \mt{sql\_maxable\_time} : \mt{sql\_maxable} \; \mt{time} \\
  \mt{val} \; \mt{sql\_max} : \mt{t} ::: \mt{Type} \to \mt{nt} ::: \mt{Type} \to \mt{sql\_maxable} \; \mt{t} \to \mt{nullify} \; \mt{t} \; \mt{nt} \to \mt{sql\_aggregate} \; \mt{t} \; \mt{nt} \\
  \mt{val} \; \mt{sql\_min} : \mt{t} ::: \mt{Type} \to \mt{nt} ::: \mt{Type} \to \mt{sql\_maxable} \; \mt{t} \to \mt{nullify} \; \mt{t} \; \mt{nt} \to \mt{sql\_aggregate} \; \mt{t} \; \mt{nt}
\end{array}$$
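As a usage sketch, consider summing a column of a hypothetical table \cd{t}.  Since SQL aggregates return \cd{NULL} on empty inputs, $\mt{SUM}$ over a non-$\mt{option}$ column yields an $\mt{option}$ result, via the $\mt{nullify}$ instance for primitive types:

\begin{verbatim}
table t : { A : int, B : float }

(* Hypothetical example; oneRow retrieves the single result row. *)
fun total () : transaction (option int) =
    r <- oneRow (SELECT SUM(t.A) AS S FROM t);
    return r.S
\end{verbatim}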

Any SQL query that returns a single column may be turned into a subquery expression.

$$\begin{array}{l}
\mt{val} \; \mt{sql\_subquery} : \mt{tables} ::: \{\{\mt{Type}\}\} \to \mt{agg} ::: \{\{\mt{Type}\}\} \to \mt{exps} ::: \{\mt{Type}\} \to \mt{nm} ::: \mt{Name} \to \mt{t} ::: \mt{Type} \to \mt{nt} ::: \mt{Type} \\
\hspace{.1in} \to \mt{nullify} \; \mt{t} \; \mt{nt} \to \mt{sql\_query} \; \mt{tables} \; \mt{agg} \; [\mt{nm} = \mt{t}] \to \mt{sql\_exp} \; \mt{tables} \; \mt{agg} \; \mt{exps} \; \mt{nt}
\end{array}$$

There is also an \cd{IF..THEN..ELSE..} construct that is compiled into standard SQL \cd{CASE} expressions.
$$\begin{array}{l}
\mt{val} \; \mt{sql\_if\_then\_else} : \mt{tables} ::: \{\{\mt{Type}\}\} \to \mt{agg} ::: \{\{\mt{Type}\}\} \to \mt{exps} ::: \{\mt{Type}\} \to \mt{t} ::: \mt{Type} \\
\hspace{.1in} \to \mt{sql\_exp} \; \mt{tables} \; \mt{agg} \; \mt{exps} \; \mt{bool} \\
\hspace{.1in} \to \mt{sql\_exp} \; \mt{tables} \; \mt{agg} \; \mt{exps} \; \mt{t} \\
\hspace{.1in} \to \mt{sql\_exp} \; \mt{tables} \; \mt{agg} \; \mt{exps} \; \mt{t} \\
\hspace{.1in} \to \mt{sql\_exp} \; \mt{tables} \; \mt{agg} \; \mt{exps} \; \mt{t}
\end{array}$$
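For instance, negative values might be clipped to zero directly in a query.  This is a sketch, assuming a table \cd{t} with a column \cd{A : int}:

\begin{verbatim}
(* The IF expression below is compiled to a SQL CASE expression. *)
fun sumClipped () : transaction int =
    query (SELECT (IF t.A < 0 THEN 0 ELSE t.A) AS Clipped FROM t)
          (fn r acc => return (acc + r.Clipped))
          0
\end{verbatim}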

\texttt{FROM} clauses are specified using a type family whose arguments are, respectively, the free table variables and the table variables bound by the clause.
$$\begin{array}{l}
  \mt{con} \; \mt{sql\_from\_items} :: \{\{\mt{Type}\}\} \to \{\{\mt{Type}\}\} \to \mt{Type} \\
  \mt{val} \; \mt{sql\_from\_table} : \mt{free} ::: \{\{\mt{Type}\}\} \\
  \hspace{.1in} \to \mt{t} ::: \mt{Type} \to \mt{fs} ::: \{\mt{Type}\} \to \mt{fieldsOf} \; \mt{t} \; \mt{fs} \to \mt{name} :: \mt{Name} \to \mt{t} \to \mt{sql\_from\_items} \; \mt{free} \; [\mt{name} = \mt{fs}] \\
  \mt{val} \; \mt{sql\_from\_query} : \mt{free} ::: \{\{\mt{Type}\}\} \to \mt{fs} ::: \{\mt{Type}\} \to \mt{name} :: \mt{Name} \to \mt{sql\_query} \; \mt{free} \; [] \; \mt{fs} \to \mt{sql\_from\_items} \; \mt{free} \; [\mt{name} = \mt{fs}] \\
  \mt{val} \; \mt{sql\_from\_comma} : \mt{free} ::: \{\{\mt{Type}\}\} \to \mt{tabs1} ::: \{\{\mt{Type}\}\} \to \mt{tabs2} ::: \{\{\mt{Type}\}\} \to [\mt{tabs1} \sim \mt{tabs2}] \\
  \hspace{.1in} \Rightarrow \mt{sql\_from\_items} \; \mt{free} \; \mt{tabs1} \to \mt{sql\_from\_items} \; \mt{free} \; \mt{tabs2} \\
  \hspace{.1in} \to \mt{sql\_from\_items} \; \mt{free} \; (\mt{tabs1} \rc \mt{tabs2}) \\
  \mt{val} \; \mt{sql\_inner\_join} : \mt{free} ::: \{\{\mt{Type}\}\} \to \mt{tabs1} ::: \{\{\mt{Type}\}\} \to \mt{tabs2} ::: \{\{\mt{Type}\}\} \\
  \hspace{.1in} \to [\mt{free} \sim \mt{tabs1}] \Rightarrow [\mt{free} \sim \mt{tabs2}] \Rightarrow [\mt{tabs1} \sim \mt{tabs2}] \\
  \hspace{.1in} \Rightarrow \mt{sql\_from\_items} \; \mt{free} \; \mt{tabs1} \to \mt{sql\_from\_items} \; \mt{free} \; \mt{tabs2} \\
  \hspace{.1in} \to \mt{sql\_exp} \; (\mt{free} \rc \mt{tabs1} \rc \mt{tabs2}) \; [] \; [] \; \mt{bool} \\
  \hspace{.1in} \to \mt{sql\_from\_items} \; \mt{free} \; (\mt{tabs1} \rc \mt{tabs2})
\end{array}$$
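In concrete syntax, these combinators stay behind the scenes.  A sketch with hypothetical tables:

\begin{verbatim}
table emp : { Id : int, Nam : string }
table pay : { Emp : int, Amount : float }

(* JOIN .. ON .. elaborates to sql_inner_join. *)
fun report () : transaction xbody =
    queryX (SELECT emp.Nam, pay.Amount
            FROM emp JOIN pay ON emp.Id = pay.Emp)
           (fn r => <xml>{[r.Nam]}: {[r.Amount]}<br/></xml>)
\end{verbatim}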

Besides these basic cases, outer joins are supported; they require a type class for turning non-$\mt{option}$ columns into $\mt{option}$ columns.
$$\begin{array}{l}
  \mt{class} \; \mt{nullify} :: \mt{Type} \to \mt{Type} \to \mt{Type} \\
  \mt{val} \; \mt{nullify\_option} : \mt{t} ::: \mt{Type} \to \mt{nullify} \; (\mt{option} \; \mt{t}) \; (\mt{option} \; \mt{t}) \\
  \mt{val} \; \mt{nullify\_prim} : \mt{t} ::: \mt{Type} \to \mt{sql\_injectable\_prim} \; \mt{t} \to \mt{nullify} \; \mt{t} \; (\mt{option} \; \mt{t})
\end{array}$$

Left, right, and full outer joins can now be expressed using functions that accept records of $\mt{nullify}$ instances.  Here, we give only the type for a left join as an example.

$$\begin{array}{l}
 \mt{val} \; \mt{sql\_left\_join} : \mt{free} ::: \{\{\mt{Type}\}\} \to \mt{tabs1} ::: \{\{\mt{Type}\}\} \to \mt{tabs2} ::: \{\{(\mt{Type} \times \mt{Type})\}\} \\
  \hspace{.1in} \to [\mt{free} \sim \mt{tabs1}] \Rightarrow [\mt{free} \sim \mt{tabs2}] \Rightarrow [\mt{tabs1} \sim \mt{tabs2}] \\
 \hspace{.1in} \Rightarrow \$(\mt{map} \; (\lambda \mt{r} \Rightarrow \$(\mt{map} \; (\lambda \mt{p} :: (\mt{Type} \times \mt{Type}) \Rightarrow \mt{nullify} \; \mt{p}.1 \; \mt{p}.2) \; \mt{r})) \; \mt{tabs2}) \\
 \hspace{.1in} \to \mt{sql\_from\_items} \; \mt{free} \; \mt{tabs1} \to \mt{sql\_from\_items} \; \mt{free} \; (\mt{map} \; (\mt{map} \; (\lambda \mt{p} :: (\mt{Type} \times \mt{Type}) \Rightarrow \mt{p}.1)) \; \mt{tabs2}) \\
 \hspace{.1in} \to \mt{sql\_exp} \; (\mt{free} \rc \mt{tabs1} \rc \mt{map} \; (\mt{map} \; (\lambda \mt{p} :: (\mt{Type} \times \mt{Type}) \Rightarrow \mt{p}.1)) \; \mt{tabs2}) \; [] \; [] \; \mt{bool} \\
 \hspace{.1in} \to \mt{sql\_from\_items} \; \mt{free} \; (\mt{tabs1} \rc \mt{map} \; (\mt{map} \; (\lambda \mt{p} :: (\mt{Type} \times \mt{Type}) \Rightarrow \mt{p}.2)) \; \mt{tabs2})
\end{array}$$
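In concrete syntax, a left join uses the familiar SQL keywords, and columns drawn from the right-hand table come back at $\mt{option}$ types.  A sketch with hypothetical tables:

\begin{verbatim}
table item : { Id : int, Title : string }
table review : { Item : int, Score : int }

(* Score has type option int in each result row, via nullify. *)
fun allItems () : transaction xbody =
    queryX (SELECT item.Title, review.Score
            FROM item LEFT JOIN review ON item.Id = review.Item)
           (fn r => <xml>{[r.Title]}: {[Option.get 0 r.Score]}<br/></xml>)
\end{verbatim}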

We wrap up the definition of query syntax with the types used in representing $\mt{ORDER} \; \mt{BY}$, $\mt{LIMIT}$, and $\mt{OFFSET}$ clauses.
$$\begin{array}{l}
  \mt{type} \; \mt{sql\_direction} \\
  \mt{val} \; \mt{sql\_asc} : \mt{sql\_direction} \\
  \mt{val} \; \mt{sql\_desc} : \mt{sql\_direction} \\
  \\
  \mt{con} \; \mt{sql\_order\_by} :: \{\{\mt{Type}\}\} \to \{\mt{Type}\} \to \mt{Type} \\
  \mt{val} \; \mt{sql\_order\_by\_Nil} : \mt{tables} ::: \{\{\mt{Type}\}\} \to \mt{exps} :: \{\mt{Type}\} \to \mt{sql\_order\_by} \; \mt{tables} \; \mt{exps} \\
  \mt{val} \; \mt{sql\_order\_by\_Cons} : \mt{tf} ::: (\{\{\mt{Type}\}\} \to \{\{\mt{Type}\}\} \to \{\mt{Type}\} \to \mt{Type} \to \mt{Type}) \\
  \hspace{.1in} \to \mt{tables} ::: \{\{\mt{Type}\}\} \to \mt{exps} ::: \{\mt{Type}\} \to \mt{t} ::: \mt{Type} \\
  \hspace{.1in} \to \mt{sql\_window} \; \mt{tf} \to \mt{tf} \; \mt{tables} \; [] \; \mt{exps} \; \mt{t} \to \mt{sql\_direction} \to \mt{sql\_order\_by} \; \mt{tables} \; \mt{exps} \to \mt{sql\_order\_by} \; \mt{tables} \; \mt{exps} \\
  \mt{val} \; \mt{sql\_order\_by\_random} : \mt{tables} ::: \{\{\mt{Type}\}\} \to \mt{exps} ::: \{\mt{Type}\} \to \mt{sql\_order\_by} \; \mt{tables} \; \mt{exps} \\
  \\
  \mt{type} \; \mt{sql\_limit} \\
  \mt{val} \; \mt{sql\_no\_limit} : \mt{sql\_limit} \\
  \mt{val} \; \mt{sql\_limit} : \mt{int} \to \mt{sql\_limit} \\
  \\
  \mt{type} \; \mt{sql\_offset} \\
  \mt{val} \; \mt{sql\_no\_offset} : \mt{sql\_offset} \\
  \mt{val} \; \mt{sql\_offset} : \mt{int} \to \mt{sql\_offset}
\end{array}$$
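In concrete syntax, these clauses appear in their usual SQL positions.  A sketch assuming a table \cd{t} with a column \cd{A : int}:

\begin{verbatim}
(* The ten largest values of A, in descending order *)
fun topTen () : transaction xbody =
    queryX (SELECT t.A FROM t ORDER BY t.A DESC LIMIT 10)
           (fn r => <xml>{[r.A]}<br/></xml>)
\end{verbatim}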

When using Postgres, \cd{SELECT} and \cd{ORDER BY} are allowed to contain top-level uses of \emph{window functions}.  A separate type family \cd{sql\_expw} is provided for such cases, with some type class convenience for overloading between normal and window expressions.
$$\begin{array}{l}
  \mt{con} \; \mt{sql\_expw} :: \{\{\mt{Type}\}\} \to \{\{\mt{Type}\}\} \to \{\mt{Type}\} \to \mt{Type} \to \mt{Type} \\
  \\
  \mt{class} \; \mt{sql\_window} :: (\{\{\mt{Type}\}\} \to \{\{\mt{Type}\}\} \to \{\mt{Type}\} \to \mt{Type} \to \mt{Type}) \to \mt{Type} \\
  \mt{val} \; \mt{sql\_window\_normal} : \mt{sql\_window} \; \mt{sql\_exp} \\
  \mt{val} \; \mt{sql\_window\_fancy} : \mt{sql\_window} \; \mt{sql\_expw} \\
  \mt{val} \; \mt{sql\_window} : \mt{tf} ::: (\{\{\mt{Type}\}\} \to \{\{\mt{Type}\}\} \to \{\mt{Type}\} \to \mt{Type} \to \mt{Type}) \\
  \hspace{.1in} \to \mt{tables} ::: \{\{\mt{Type}\}\} \to \mt{agg} ::: \{\{\mt{Type}\}\} \to \mt{exps} ::: \{\mt{Type}\} \to \mt{t} ::: \mt{Type} \\
  \hspace{.1in} \to \mt{sql\_window} \; \mt{tf} \\
  \hspace{.1in} \to \mt{tf} \; \mt{tables} \; \mt{agg} \; \mt{exps} \; \mt{t} \\
  \hspace{.1in} \to \mt{sql\_expw} \; \mt{tables} \; \mt{agg} \; \mt{exps} \; \mt{t} \\
  \\
  \mt{con} \; \mt{sql\_partition} :: \{\{\mt{Type}\}\} \to \{\{\mt{Type}\}\} \to \{\mt{Type}\} \to \mt{Type} \\
  \mt{val} \; \mt{sql\_no\_partition} : \mt{tables} ::: \{\{\mt{Type}\}\} \to \mt{agg} ::: \{\{\mt{Type}\}\} \to \mt{exps} ::: \{\mt{Type}\} \\
  \hspace{.1in} \to \mt{sql\_partition} \; \mt{tables} \; \mt{agg} \; \mt{exps} \\
  \mt{val} \; \mt{sql\_partition} : \mt{tables} ::: \{\{\mt{Type}\}\} \to \mt{agg} ::: \{\{\mt{Type}\}\} \to \mt{exps} ::: \{\mt{Type}\} \to \mt{t} ::: \mt{Type} \\
  \hspace{.1in} \to \mt{sql\_exp} \; \mt{tables} \; \mt{agg} \; \mt{exps} \; \mt{t} \\
  \hspace{.1in} \to \mt{sql\_partition} \; \mt{tables} \; \mt{agg} \; \mt{exps} \\
  \\
  \mt{con} \; \mt{sql\_window\_function} :: \{\{\mt{Type}\}\} \to \{\{\mt{Type}\}\} \to \{\mt{Type}\} \to \mt{Type} \to \mt{Type} \\
  \mt{val} \; \mt{sql\_window\_function} : \mt{tables} ::: \{\{\mt{Type}\}\} \to \mt{agg} ::: \{\{\mt{Type}\}\} \to \mt{exps} ::: \{\mt{Type}\} \\
  \hspace{.1in} \to \mt{t} ::: \mt{Type} \\
  \hspace{.1in} \to \mt{sql\_window\_function} \; \mt{tables} \; \mt{agg} \; \mt{exps} \; \mt{t} \\
  \hspace{.1in} \to \mt{sql\_partition} \; \mt{tables} \; \mt{agg} \; \mt{exps} \\
  \hspace{.1in} \to \mt{sql\_order\_by} \; \mt{tables} \; \mt{exps} \\
  \hspace{.1in} \to \mt{sql\_expw} \; \mt{tables} \; \mt{agg} \; \mt{exps} \; \mt{t} \\
  \\
  \mt{val} \; \mt{sql\_window\_aggregate} : \mt{tables} ::: \{\{\mt{Type}\}\} \to \mt{agg} ::: \{\{\mt{Type}\}\} \to \mt{exps} ::: \{\mt{Type}\} \\
  \hspace{.1in} \to \mt{t} ::: \mt{Type} \to \mt{nt} ::: \mt{Type} \\
  \hspace{.1in} \to \mt{sql\_aggregate} \; \mt{t} \; \mt{nt} \\
  \hspace{.1in} \to \mt{sql\_exp} \; \mt{tables} \; \mt{agg} \; \mt{exps} \; \mt{t} \\
  \hspace{.1in} \to \mt{sql\_window\_function} \; \mt{tables} \; \mt{agg} \; \mt{exps} \; \mt{nt} \\
  \mt{val} \; \mt{sql\_window\_count} : \mt{tables} ::: \{\{\mt{Type}\}\} \to \mt{agg} ::: \{\{\mt{Type}\}\} \to \mt{exps} ::: \{\mt{Type}\} \\
  \hspace{.1in} \to \mt{sql\_window\_function} \; \mt{tables} \; \mt{agg} \; \mt{exps} \; \mt{int} \\
  \mt{val} \; \mt{sql\_rank} : \mt{tables} ::: \{\{\mt{Type}\}\} \to \mt{agg} ::: \{\{\mt{Type}\}\} \to \mt{exps} ::: \{\mt{Type}\} \\
  \hspace{.1in} \to \mt{sql\_window\_function} \; \mt{tables} \; \mt{agg} \; \mt{exps} \; \mt{int}
\end{array}$$
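For example, rows may be ranked directly in SQL.  This is a sketch, assuming a table \cd{t} with a column \cd{A : int} and the Postgres backend; the concrete syntax mirrors standard SQL \cd{OVER} clauses:

\begin{verbatim}
(* RANK() elaborates to sql_rank. *)
fun ranked () : transaction xbody =
    queryX (SELECT t.A, RANK() OVER (ORDER BY t.A DESC) AS R FROM t)
           (fn r => <xml>{[r.R]}: {[r.A]}<br/></xml>)
\end{verbatim}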


\subsubsection{DML}

The Ur/Web library also includes an embedding of a fragment of SQL's DML, the Data Manipulation Language, for modifying database tables.  Any piece of DML may be executed in a transaction.

$$\begin{array}{l}
  \mt{type} \; \mt{dml} \\
  \mt{val} \; \mt{dml} : \mt{dml} \to \mt{transaction} \; \mt{unit}
\end{array}$$

The function $\mt{Basis.dml}$ will trigger a fatal application error if the command fails, for instance, because a data integrity constraint is violated.  An alternate function returns an error message as a string instead.

$$\begin{array}{l}
  \mt{val} \; \mt{tryDml} : \mt{dml} \to \mt{transaction} \; (\mt{option} \; \mt{string})
\end{array}$$

Properly typed records may be used to form $\mt{INSERT}$ commands.
$$\begin{array}{l}
  \mt{val} \; \mt{insert} : \mt{fields} ::: \{\mt{Type}\} \to \mt{sql\_table} \; \mt{fields} \\
  \hspace{.1in} \to \$(\mt{map} \; (\mt{sql\_exp} \; [] \; [] \; []) \; \mt{fields}) \to \mt{dml}
\end{array}$$
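In concrete syntax, the values are antiquoted Ur expressions, which are injected safely.  A sketch assuming a hypothetical table:

\begin{verbatim}
table t : { A : int, B : float }

fun add (a : int) (b : float) : transaction unit =
    dml (INSERT INTO t (A, B) VALUES ({[a]}, {[b]}))
\end{verbatim}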

An $\mt{UPDATE}$ command is formed from a choice of which table fields to leave alone and which to change, along with an expression to use to compute the new value of each changed field and a $\mt{WHERE}$ clause.  Note that, in the table environment applied to expressions, the table being updated is hardcoded at the name $\mt{T}$.  The parsing extension for $\mt{UPDATE}$ will elaborate all table-free field references to use table variable $\mt{T}$.
$$\begin{array}{l}
  \mt{val} \; \mt{update} : \mt{unchanged} ::: \{\mt{Type}\} \to \mt{changed} :: \{\mt{Type}\} \to [\mt{changed} \sim \mt{unchanged}] \\
  \hspace{.1in} \Rightarrow \$(\mt{map} \; (\mt{sql\_exp} \; [\mt{T} = \mt{changed} \rc \mt{unchanged}] \; [] \; []) \; \mt{changed}) \\
  \hspace{.1in} \to \mt{sql\_table} \; (\mt{changed} \rc \mt{unchanged}) \to \mt{sql\_exp} \; [\mt{T} = \mt{changed} \rc \mt{unchanged}] \; [] \; [] \; \mt{bool} \to \mt{dml}
\end{array}$$

A $\mt{DELETE}$ command is formed from a table and a $\mt{WHERE}$ clause.  The above use of $\mt{T}$ is repeated.
$$\begin{array}{l}
  \mt{val} \; \mt{delete} : \mt{fields} ::: \{\mt{Type}\} \to \mt{sql\_table} \; \mt{fields} \to \mt{sql\_exp} \; [\mt{T} = \mt{fields}] \; [] \; [] \; \mt{bool} \to \mt{dml}
\end{array}$$
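Concrete syntax for both commands follows SQL, where bare column references implicitly use the table variable $\mt{T}$.  A sketch assuming a hypothetical table:

\begin{verbatim}
table t : { A : int, B : float }

fun setB (key : int) (b : float) : transaction unit =
    dml (UPDATE t SET B = {[b]} WHERE A = {[key]})

fun remove (key : int) : transaction unit =
    dml (DELETE FROM t WHERE A = {[key]})
\end{verbatim}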

\subsubsection{Sequences}

SQL sequences are counters with concurrency control, often used to assign unique IDs.  Ur/Web supports them via a simple interface.  The only way to create a sequence is with the $\mt{sequence}$ declaration form.

$$\begin{array}{l}
  \mt{type} \; \mt{sql\_sequence} \\
  \mt{val} \; \mt{nextval} : \mt{sql\_sequence} \to \mt{transaction} \; \mt{int} \\
  \mt{val} \; \mt{setval} : \mt{sql\_sequence} \to \mt{int} \to \mt{transaction} \; \mt{unit}
\end{array}$$
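For example, a sequence can assign fresh IDs to new rows.  A sketch:

\begin{verbatim}
table entry : { Id : int, Title : string }
sequence ids

fun create (title : string) : transaction unit =
    id <- nextval ids;
    dml (INSERT INTO entry (Id, Title) VALUES ({[id]}, {[title]}))
\end{verbatim}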


\subsection{\label{xml}XML}

Ur/Web's library contains an encoding of XML syntax and semantic constraints.  We make no effort to follow the standards governing XML schemas.  Rather, XML fragments are viewed more as values of ML datatypes, and we only track which tags are allowed inside which other tags.  The Ur/Web standard library encodes a very loose version of XHTML, where it is very easy to produce documents which are invalid XHTML, but which still display properly in all major browsers.  The main purposes of the invariants that are enforced are first, to provide some documentation about the places where it would make sense to insert XML fragments; and second, to rule out code injection attacks and other abstraction violations related to HTML syntax.

The basic XML type family has arguments respectively indicating the \emph{context} of a fragment, the fields that the fragment expects to be bound on entry (and their types), and the fields that the fragment will bind (and their types).  Contexts are a record-based ``poor man's subtyping'' encoding, with each possible set of valid tags corresponding to a different context record.  For instance, the context for the \texttt{<td>} tag is $[\mt{Dyn}, \mt{MakeForm}, \mt{Tr}]$, to indicate nesting inside a \texttt{<tr>} tag with the ability to nest \texttt{<form>} and \texttt{<dyn>} tags (see below).  Contexts are maintained in a somewhat ad-hoc way; the only definitive reference for their meanings is the types of the tag values in \texttt{basis.urs}.  The arguments dealing with field binding are only relevant to HTML forms.
$$\begin{array}{l}
  \mt{con} \; \mt{xml} :: \{\mt{Unit}\} \to \{\mt{Type}\} \to \{\mt{Type}\} \to \mt{Type}
\end{array}$$

We also have a type family of XML tags, indexed respectively by the record of optional attributes accepted by the tag, the context in which the tag may be placed, the context required of children of the tag, which form fields the tag uses, and which fields the tag defines.
$$\begin{array}{l}
  \mt{con} \; \mt{tag} :: \{\mt{Type}\} \to \{\mt{Unit}\} \to \{\mt{Unit}\} \to \{\mt{Type}\} \to \{\mt{Type}\} \to \mt{Type}
\end{array}$$

Literal text may be injected into XML as ``CDATA.''
$$\begin{array}{l}
  \mt{val} \; \mt{cdata} : \mt{ctx} ::: \{\mt{Unit}\} \to \mt{use} ::: \{\mt{Type}\} \to \mt{string} \to \mt{xml} \; \mt{ctx} \; \mt{use} \; []
\end{array}$$

There is also a function to insert the literal value of a character.  Since Ur/Web uses the UTF-8 text encoding, the $\mt{cdata}$ function is only sufficient for characters with ASCII codes below 128.  Higher codes have different meanings in UTF-8 than in plain ASCII, so this alternate function should be used for them.
$$\begin{array}{l}
  \mt{val} \; \mt{cdataChar} : \mt{ctx} ::: \{\mt{Unit}\} \to \mt{use} ::: \{\mt{Type}\} \to \mt{char} \to \mt{xml} \; \mt{ctx} \; \mt{use} \; []
\end{array}$$

There is a function for producing an XML tree with a particular tag at its root.
$$\begin{array}{l}
  \mt{val} \; \mt{tag} : \mt{attrsGiven} ::: \{\mt{Type}\} \to \mt{attrsAbsent} ::: \{\mt{Type}\} \to \mt{ctxOuter} ::: \{\mt{Unit}\} \to \mt{ctxInner} ::: \{\mt{Unit}\} \\
  \hspace{.1in} \to \mt{useOuter} ::: \{\mt{Type}\} \to \mt{useInner} ::: \{\mt{Type}\} \to \mt{bindOuter} ::: \{\mt{Type}\} \to \mt{bindInner} ::: \{\mt{Type}\} \\
  \hspace{.1in} \to [\mt{attrsGiven} \sim \mt{attrsAbsent}] \Rightarrow [\mt{useOuter} \sim \mt{useInner}] \Rightarrow [\mt{bindOuter} \sim \mt{bindInner}] \\
  \hspace{.1in} \Rightarrow \mt{css\_class} \\
  \hspace{.1in} \to \mt{option} \; (\mt{signal} \; \mt{css\_class}) \\
  \hspace{.1in} \to \mt{css\_style} \\
  \hspace{.1in} \to \mt{option} \; (\mt{signal} \; \mt{css\_style}) \\
  \hspace{.1in} \to \$\mt{attrsGiven} \\
  \hspace{.1in} \to \mt{tag} \; (\mt{attrsGiven} \rc \mt{attrsAbsent}) \; \mt{ctxOuter} \; \mt{ctxInner} \; \mt{useOuter} \; \mt{bindOuter} \\
  \hspace{.1in} \to \mt{xml} \; \mt{ctxInner} \; \mt{useInner} \; \mt{bindInner} \to \mt{xml} \; \mt{ctxOuter} \; (\mt{useOuter} \rc \mt{useInner}) \; (\mt{bindOuter} \rc \mt{bindInner})
\end{array}$$
Note that any tag may be assigned a CSS class, or left without a class by passing $\mt{Basis.null}$ as the first value-level argument.  This is the sole way of making use of the values produced by $\mt{style}$ declarations.  The function $\mt{Basis.classes}$ can be used to specify a list of CSS classes for a single tag.  Stylesheets to assign properties to the classes can be linked via URLs with \texttt{link} tags.  Ur/Web makes it easy to calculate upper bounds on usage of CSS classes through program analysis, with the \cd{-css} command-line flag.

Also note that two different arguments are available for setting CSS classes: the first, associated with the \texttt{class} pseudo-attribute syntactic sugar, fixes the class of a tag for the duration of the tag's life; while the second, associated with the \texttt{dynClass} pseudo-attribute, allows the class to vary over the tag's life.  See Section \ref{signals} for an introduction to the $\mt{signal}$ type family.

The third and fourth value-level arguments make it possible to generate HTML \cd{style} attributes, either with fixed content (the \cd{style} attribute) or dynamic content (the \cd{dynStyle} pseudo-attribute).

Two XML fragments may be concatenated.
$$\begin{array}{l}
  \mt{val} \; \mt{join} : \mt{ctx} ::: \{\mt{Unit}\} \to \mt{use_1} ::: \{\mt{Type}\} \to \mt{bind_1} ::: \{\mt{Type}\} \to \mt{bind_2} ::: \{\mt{Type}\} \\
  \hspace{.1in} \to [\mt{use_1} \sim \mt{bind_1}] \Rightarrow [\mt{bind_1} \sim \mt{bind_2}] \\
  \hspace{.1in} \Rightarrow \mt{xml} \; \mt{ctx} \; \mt{use_1} \; \mt{bind_1} \to \mt{xml} \; \mt{ctx} \; (\mt{use_1} \rc \mt{bind_1}) \; \mt{bind_2} \to \mt{xml} \; \mt{ctx} \; \mt{use_1} \; (\mt{bind_1} \rc \mt{bind_2})
\end{array}$$

Finally, any XML fragment may be updated to ``claim'' to use more form fields than it does.
$$\begin{array}{l}
  \mt{val} \; \mt{useMore} : \mt{ctx} ::: \{\mt{Unit}\} \to \mt{use_1} ::: \{\mt{Type}\} \to \mt{use_2} ::: \{\mt{Type}\} \to \mt{bind} ::: \{\mt{Type}\} \to [\mt{use_1} \sim \mt{use_2}] \\
  \hspace{.1in} \Rightarrow \mt{xml} \; \mt{ctx} \; \mt{use_1} \; \mt{bind} \to \mt{xml} \; \mt{ctx} \; (\mt{use_1} \rc \mt{use_2}) \; \mt{bind}
\end{array}$$

We will not list here the different HTML tags and related functions from the standard library.  They should be easy enough to understand from the code in \texttt{basis.urs}.  The set of tags in the library is not yet claimed to be complete for HTML standards.  Also note that there is currently no way for the programmer to add his own tags.  It \emph{is} possible to add new tags directly to \texttt{basis.urs}, but this should only be done as a prelude to suggesting a patch to the main distribution.

One last useful function is for aborting any page generation, returning some XML as an error message.  This function takes the place of some uses of a general exception mechanism.
$$\begin{array}{l}
  \mt{val} \; \mt{error} : \mt{t} ::: \mt{Type} \to \mt{xbody} \to \mt{t}
\end{array}$$


\subsection{Client-Side Programming}

Ur/Web supports running code on web browsers, via automatic compilation to JavaScript.

\subsubsection{The Basics}

All of the functions in this subsection are client-side only.

Clients can open alert and confirm dialog boxes, in the usual annoying JavaScript way.
$$\begin{array}{l}
  \mt{val} \; \mt{alert} : \mt{string} \to \mt{transaction} \; \mt{unit} \\
  \mt{val} \; \mt{confirm} : \mt{string} \to \mt{transaction} \; \mt{bool}
\end{array}$$

Any transaction may be run in a new thread with the $\mt{spawn}$ function.
$$\begin{array}{l}
  \mt{val} \; \mt{spawn} : \mt{transaction} \; \mt{unit} \to \mt{transaction} \; \mt{unit}
\end{array}$$

The current thread can be paused for at least a specified number of milliseconds.
$$\begin{array}{l}
  \mt{val} \; \mt{sleep} : \mt{int} \to \mt{transaction} \; \mt{unit}
\end{array}$$
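For instance, the two may be combined to run a delayed action without blocking other client-side code.  A sketch:

\begin{verbatim}
(* Pop up a message after roughly two seconds. *)
fun delayedHello () : transaction unit =
    spawn (sleep 2000; alert "Hello!")
\end{verbatim}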

A few functions are available to register callbacks for particular error events.  Respectively, they are triggered on calls to $\mt{error}$, uncaught JavaScript exceptions, failure of remote procedure calls, the severance of the connection serving asynchronous messages, or the occurrence of some other error with that connection.  If no handlers are registered for a kind of error, then a JavaScript \cd{alert()} is used to announce its occurrence.  When one of these functions is called multiple times within a single page, all registered handlers are run when appropriate events occur, in the reverse of their registration order.
$$\begin{array}{l}
  \mt{val} \; \mt{onError} : (\mt{xbody} \to \mt{transaction} \; \mt{unit}) \to \mt{transaction} \; \mt{unit} \\
  \mt{val} \; \mt{onFail} : (\mt{string} \to \mt{transaction} \; \mt{unit}) \to \mt{transaction} \; \mt{unit} \\
  \mt{val} \; \mt{onConnectFail} : \mt{transaction} \; \mt{unit} \to \mt{transaction} \; \mt{unit} \\
  \mt{val} \; \mt{onDisconnect} : \mt{transaction} \; \mt{unit} \to \mt{transaction} \; \mt{unit} \\
  \mt{val} \; \mt{onServerError} : (\mt{string} \to \mt{transaction} \; \mt{unit}) \to \mt{transaction} \; \mt{unit}
\end{array}$$

There are also functions to register standard document-level event handlers.

$$\begin{array}{l}
 \mt{val} \; \mt{onClick} : (\mt{mouseEvent} \to \mt{transaction} \; \mt{unit}) \to \mt{transaction} \; \mt{unit} \\
 \mt{val} \; \mt{onDblclick} : (\mt{mouseEvent} \to \mt{transaction} \; \mt{unit}) \to \mt{transaction} \; \mt{unit} \\
 \mt{val} \; \mt{onKeydown} : (\mt{keyEvent} \to \mt{transaction} \; \mt{unit}) \to \mt{transaction} \; \mt{unit} \\
 \mt{val} \; \mt{onKeypress} : (\mt{keyEvent} \to \mt{transaction} \; \mt{unit}) \to \mt{transaction} \; \mt{unit} \\
 \mt{val} \; \mt{onKeyup} : (\mt{keyEvent} \to \mt{transaction} \; \mt{unit}) \to \mt{transaction} \; \mt{unit} \\
 \mt{val} \; \mt{onMousedown} : (\mt{mouseEvent} \to \mt{transaction} \; \mt{unit}) \to \mt{transaction} \; \mt{unit} \\
 \mt{val} \; \mt{onMouseup} : (\mt{mouseEvent} \to \mt{transaction} \; \mt{unit}) \to \mt{transaction} \; \mt{unit}
\end{array}$$

Versions of standard JavaScript functions are provided; event handlers may call them to suppress default handling of an event or to stop its propagation to parent DOM nodes, respectively.

$$\begin{array}{l}
  \mt{val} \; \mt{preventDefault} : \mt{transaction} \; \mt{unit} \\
  \mt{val} \; \mt{stopPropagation} : \mt{transaction} \; \mt{unit}
\end{array}$$

\subsubsection{Node IDs}

There is an abstract type of node IDs that may be assigned to \cd{id} attributes of most HTML tags.

$$\begin{array}{l}
  \mt{type} \; \mt{id} \\
  \mt{val} \; \mt{fresh} : \mt{transaction} \; \mt{id}
\end{array}$$

The \cd{fresh} function is allowed on both server and client, and it is the only way to create IDs; in particular, there is no way to force an ID to match a particular string.  The main semantic importance of IDs within Ur/Web is in uses of the HTML \cd{<label>} tag.  IDs play a much more central role in mainstream JavaScript programming, but Ur/Web uses a very different model to enable changes to particular nodes of a page tree, as the next manual subsection explains.  IDs may still be useful in interfacing with JavaScript code (for instance, through Ur/Web's FFI).

One further use of IDs is as handles for requesting that \emph{focus} be given to specific tags.

$$\begin{array}{l}
  \mt{val} \; \mt{giveFocus} : \mt{id} \to \mt{transaction} \; \mt{unit}
\end{array}$$

\subsubsection{\label{signals}Functional-Reactive Page Generation}

Most approaches to ``AJAX''-style coding involve imperative manipulation of the DOM tree representing an HTML document's structure.  Ur/Web follows the \emph{functional-reactive} approach instead.  Programs may allocate mutable \emph{sources} of arbitrary types, and an HTML page is effectively a pure function over the latest values of the sources.  The page is not mutated directly, but rather it changes automatically as the sources are mutated.

More operationally, you can think of a source as a mutable cell with facilities for subscription to change notifications.  That level of detail is hidden behind a monadic facility to be described below.  First, there are three primitive operations for working with sources just as if they were ML \cd{ref} cells, corresponding to ML's \cd{ref}, \cd{:=}, and \cd{!} operations.

$$\begin{array}{l}
  \mt{con} \; \mt{source} :: \mt{Type} \to \mt{Type} \\
  \mt{val} \; \mt{source} : \mt{t} ::: \mt{Type} \to \mt{t} \to \mt{transaction} \; (\mt{source} \; \mt{t}) \\
  \mt{val} \; \mt{set} : \mt{t} ::: \mt{Type} \to \mt{source} \; \mt{t} \to \mt{t} \to \mt{transaction} \; \mt{unit} \\
  \mt{val} \; \mt{get} : \mt{t} ::: \mt{Type} \to \mt{source} \; \mt{t} \to \mt{transaction} \; \mt{t}
\end{array}$$

Only source creation and setting are supported server-side, as a convenience to help in setting up a page, where you may wish to allocate many sources that will be referenced throughout the page.  All server-side storage of values inside sources uses string serializations of values, while client-side storage uses normal JavaScript values.

Pure functions over arbitrary numbers of sources are represented in a monad of \emph{signals}, which may only be used in client-side code.  Conceptually, each value of the monad $\mt{signal}$ represents some pure function over all sources that may be allocated in the course of program execution.  The monad operation $\mt{signal}$ denotes the identity function over a particular source.  By using $\mt{signal}$ on a source, you implicitly subscribe to change notifications for that source; that is, your signal will be recomputed automatically as that source changes.  The usual monad operators make it possible to build up complex signals that depend on multiple sources, with the same automatic updating on changes to source values.  There is also an operator for computing a signal's current value within a transaction.

$$\begin{array}{l}
  \mt{con} \; \mt{signal} :: \mt{Type} \to \mt{Type} \\
  \mt{val} \; \mt{signal\_monad} : \mt{monad} \; \mt{signal} \\
  \mt{val} \; \mt{signal} : \mt{t} ::: \mt{Type} \to \mt{source} \; \mt{t} \to \mt{signal} \; \mt{t} \\
  \mt{val} \; \mt{current} : \mt{t} ::: \mt{Type} \to \mt{signal} \; \mt{t} \to \mt{transaction} \; \mt{t}
\end{array}$$

A reactive portion of an HTML page is injected with a $\mt{dyn}$ tag, which has a signal-valued attribute $\mt{Signal}$.

$$\begin{array}{l}
  \mt{val} \; \mt{dyn} : \mt{ctx} ::: \{\mt{Unit}\} \to \mt{use} ::: \{\mt{Type}\} \to \mt{bind} ::: \{\mt{Type}\} \to [\mt{ctx} \sim [\mt{Dyn}]] \Rightarrow \mt{unit} \\
  \hspace{.1in} \to \mt{tag} \; [\mt{Signal} = \mt{signal} \; (\mt{xml} \; ([\mt{Dyn}] \rc \mt{ctx}) \; \mt{use} \; \mt{bind})] \; ([\mt{Dyn}] \rc \mt{ctx}) \; [] \; \mt{use} \; \mt{bind}
\end{array}$$
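Putting the pieces together, a minimal counter might look like this.  This is a sketch; the \cd{onclick} attribute takes a transaction to run when the button is clicked:

\begin{verbatim}
fun main () : transaction page =
    s <- source 0;
    return <xml><body>
      <button value="Increment" onclick={n <- get s; set s (n + 1)}/>
      <dyn signal={n <- signal s; return <xml>Count: {[n]}</xml>}/>
    </body></xml>
\end{verbatim}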

The semantics of \cd{<dyn>} tags is somewhat subtle.  When the signal associated with such a tag changes value, the associated subtree of the HTML page is recreated.  Some properties of the subtree, such as attributes and client-side widget values, are specified explicitly in the signal value, so these may be counted on to remain the same after recreation.  Other properties, like focus and cursor position within textboxes, are \emph{not} specified by signal values, and these properties will be \emph{reset} upon subtree regeneration.  Furthermore, user interaction with widgets may not work properly during regeneration.  For instance, clicking a button while it is being regenerated may not trigger its \cd{onclick} event code.

Currently, the only way to avoid undesired resets is to avoid regeneration of containing subtrees.  There are two main strategies for achieving that goal.  First, when changes to a subtree can be confined to CSS classes of tags, the \texttt{dynClass} pseudo-attribute may be used instead (see Section \ref{xml}), as it does not regenerate subtrees.  Second, a single \cd{<dyn>} tag may be broken into multiple tags, in a way that makes finer-grained dependency structure explicit.  This latter strategy can avoid ``spurious'' regenerations that are not actually required to achieve the intended semantics.

Transactions can be run on the client by including them in attributes like the $\mt{Onclick}$ attribute of $\mt{button}$, and GUI widgets like $\mt{ctextbox}$ have $\mt{Source}$ attributes that can be used to connect them to sources, so that their values can be read by code running because of, e.g., an $\mt{Onclick}$ event.  It is also possible to create an ``active'' HTML fragment that runs a $\mt{transaction}$ to determine its content, possibly allocating some sources in the process:

$$\begin{array}{l}
  \mt{val} \; \mt{active} : \mt{unit} \to \mt{tag} \; [\mt{Code} = \mt{transaction} \; \mt{xbody}] \; \mt{body} \; [] \; [] \; []
\end{array}$$

\subsubsection{Remote Procedure Calls}

Any function call may be made a client-to-server ``remote procedure call'' if the function being called needs no features that are only available to client code.  To make a function call an RPC, pass that function call as the argument to $\mt{Basis.rpc}$:

$$\begin{array}{l}
  \mt{val} \; \mt{rpc} : \mt{t} ::: \mt{Type} \to \mt{transaction} \; \mt{t} \to \mt{transaction} \; \mt{t}
\end{array}$$
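For example, a client-side event handler might ask the server for a count.  This is a sketch assuming a table \cd{t}; note the spaces in \cd{COUNT( * )}, which keep \cd{(*} from being parsed as a comment opener:

\begin{verbatim}
table t : { A : int }

fun count () : transaction int =
    oneRowE1 (SELECT COUNT( * ) AS N FROM t)

fun main () : transaction page =
    return <xml><body>
      <button value="How many?"
              onclick={n <- rpc (count ()); alert (show n)}/>
    </body></xml>
\end{verbatim}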

\subsubsection{Asynchronous Message-Passing}

To support asynchronous, ``server push'' delivery of messages to clients, any client that might need to receive an asynchronous message is assigned a unique ID.  These IDs may be retrieved both on the client and on the server, during execution of code related to a client.

$$\begin{array}{l}
  \mt{type} \; \mt{client} \\
  \mt{val} \; \mt{self} : \mt{transaction} \; \mt{client}
\end{array}$$

\emph{Channels} are the means of message-passing.  Each channel is created in the context of a client and belongs to that client; no other client may receive the channel's messages.  Each channel type includes the type of values that may be sent over the channel.  Sending and receiving are asynchronous, in the sense that a client need not be ready to receive a message right away.  Rather, sent messages may queue up, waiting to be processed.

$$\begin{array}{l}
  \mt{con} \; \mt{channel} :: \mt{Type} \to \mt{Type} \\
  \mt{val} \; \mt{channel} : \mt{t} ::: \mt{Type} \to \mt{transaction} \; (\mt{channel} \; \mt{t}) \\
  \mt{val} \; \mt{send} : \mt{t} ::: \mt{Type} \to \mt{channel} \; \mt{t} \to \mt{t} \to \mt{transaction} \; \mt{unit} \\
  \mt{val} \; \mt{recv} : \mt{t} ::: \mt{Type} \to \mt{channel} \; \mt{t} \to \mt{transaction} \; \mt{t}
\end{array}$$

The $\mt{channel}$ and $\mt{send}$ operations may only be executed on the server, and $\mt{recv}$ may only be executed on a client.  Neither clients nor channels may be passed as arguments from clients to server-side functions, so persistent channels can only be maintained by storing them in the database and looking them up using the current client ID or some application-specific value as a key.
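The following sketch (table and function names illustrative) shows the usual pattern: the server allocates a channel per client and stores it in the database, a server-side function looks channels up to send messages, and the client loops on $\mt{recv}$:

\begin{verbatim}
table listeners : { Client : client, Chan : channel string }

fun broadcast msg =
    queryI (SELECT listeners.Chan FROM listeners)
           (fn r => send r.Listeners.Chan msg)

fun main () =
    me <- self;
    ch <- channel;
    dml (INSERT INTO listeners (Client, Chan) VALUES ({[me]}, {[ch]}));
    buf <- source "";
    return <xml><body onload={let
                                fun loop () =
                                    msg <- recv ch;
                                    set buf msg;
                                    loop ()
                              in
                                loop ()
                              end}>
      <dyn signal={msg <- signal buf; return <xml>{[msg]}</xml>}/>
      <button value="Broadcast" onclick={rpc (broadcast "hi")}/>
    </body></xml>
\end{verbatim}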

Clients and channels live only as long as the web browser page views that they are associated with.  When a user surfs away, his client and its channels will be garbage-collected, after that user is not heard from for the timeout period.  Garbage collection deletes any database row that contains a client or channel directly.  Any reference to one of these types inside an $\mt{option}$ is set to $\mt{None}$ instead.  Both kinds of handling have the flavor of weak pointers, and that is a useful way to think about clients and channels in the database.

\emph{Note}: Currently, there are known concurrency issues with multi-threaded applications that employ message-passing on top of database engines that don't support true serializable transactions.  Postgres 9.1 is the only supported engine that does this properly.


\section{Ur/Web Syntax Extensions}

Ur/Web features some syntactic shorthands for building values using the functions from the last section.  This section sketches the grammar of those extensions.  We write spans of syntax inside brackets to indicate that they are optional.

\subsection{SQL}

\subsubsection{\label{tables}Table Declarations}

$\mt{table}$ declarations may include constraints, via these grammar rules.
$$\begin{array}{rrcll}
  \textrm{Declarations} & d &::=& \mt{table} \; x : c \; [pk[,]] \; cts \mid \mt{view} \; x = V \\
  \textrm{Primary key constraints} & pk &::=& \mt{PRIMARY} \; \mt{KEY} \; K \\
  \textrm{Keys} & K &::=& f \mid (f, (f,)^+) \mid \{\{e\}\} \\
  \textrm{Constraint sets} & cts &::=& \mt{CONSTRAINT} \; f \; ct \mid cts, cts \mid \{\{e\}\} \\
  \textrm{Constraints} & ct &::=& \mt{UNIQUE} \; K \mid \mt{CHECK} \; E \\
  &&& \mid \mt{FOREIGN} \; \mt{KEY} \; K \; \mt{REFERENCES} \; F \; (K) \; [\mt{ON} \; \mt{DELETE} \; pr] \; [\mt{ON} \; \mt{UPDATE} \; pr] \\
  \textrm{Foreign tables} & F &::=& x \mid \{\{e\}\} \\
  \textrm{Propagation modes} & pr &::=& \mt{NO} \; \mt{ACTION} \mid \mt{RESTRICT} \mid \mt{CASCADE} \mid \mt{SET} \; \mt{NULL} \\
  \textrm{View expressions} & V &::=& Q \mid \{e\}
\end{array}$$

A signature item $\mt{table} \; \mt{x} : \mt{c}$ is actually elaborated into two signature items: $\mt{con} \; \mt{x\_hidden\_constraints} :: \{\{\mt{Unit}\}\}$ and $\mt{val} \; \mt{x} : \mt{sql\_table} \; \mt{c} \; \mt{x\_hidden\_constraints}$.  This is appropriate for common cases where client code doesn't care which keys a table has.  It's also possible to include constraints after a $\mt{table}$ signature item, with the same syntax as for $\mt{table}$ declarations.  This may look like dependent typing, but it's just a convenience.  The constraints are type-checked to determine a constructor $u$ to include in $\mt{val} \; \mt{x} : \mt{sql\_table} \; \mt{c} \; (u \rc \mt{x\_hidden\_constraints})$, and then the expressions are thrown away.  Nonetheless, it can be useful for documentation purposes to include table constraint details in signatures.  Note that the automatic generation of $\mt{x\_hidden\_constraints}$ leads to a kind of free subtyping with respect to which constraints are defined.
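For example, the grammar above accepts declarations like the following (table, column, and constraint names are illustrative):

\begin{verbatim}
table user : { Id : int, Nam : string }
  PRIMARY KEY Id,
  CONSTRAINT Nam UNIQUE Nam,
  CONSTRAINT Id CHECK Id >= 0

table message : { User : int, Text : string }
  CONSTRAINT User FOREIGN KEY User REFERENCES user (Id) ON DELETE CASCADE
\end{verbatim}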


\subsubsection{Queries}

Queries $Q$ are added to the rules for expressions $e$.

$$\begin{array}{rrcll}
  \textrm{Queries} & Q &::=& (q \; [\mt{ORDER} \; \mt{BY} \; O] \; [\mt{LIMIT} \; N] \; [\mt{OFFSET} \; N]) \\
  \textrm{Pre-queries} & q &::=& \mt{SELECT} \; [\mt{DISTINCT}] \; P \; \mt{FROM} \; F,^+ \; [\mt{WHERE} \; E] \; [\mt{GROUP} \; \mt{BY} \; p,^+] \; [\mt{HAVING} \; E] \\
  &&& \mid q \; R \; q \mid \{\{\{e\}\}\} \\
  \textrm{Relational operators} & R &::=& \mt{UNION} \mid \mt{INTERSECT} \mid \mt{EXCEPT} \\
  \textrm{$\mt{ORDER \; BY}$ items} & O &::=& \mt{RANDOM} [()] \mid \hat{E} \; [o] \mid \hat{E} \; [o], O
\end{array}$$

$$\begin{array}{rrcll}
  \textrm{Projections} & P &::=& \ast & \textrm{all columns} \\
  &&& p,^+ & \textrm{particular columns} \\
  \textrm{Pre-projections} & p &::=& t.f & \textrm{one column from a table} \\
  &&& t.\{\{c\}\} & \textrm{a record of columns from a table (of kind $\{\mt{Type}\}$)} \\
  &&& t.* & \textrm{all columns from a table} \\
  &&& \hat{E} \; [\mt{AS} \; f] & \textrm{expression column} \\
  \textrm{Table names} & t &::=& x & \textrm{constant table name (automatically capitalized)} \\
  &&& X & \textrm{constant table name} \\
  &&& \{\{c\}\} & \textrm{computed table name (of kind $\mt{Name}$)} \\
  \textrm{Column names} & f &::=& X & \textrm{constant column name} \\
  &&& \{c\} & \textrm{computed column name (of kind $\mt{Name}$)} \\
  \textrm{Tables} & T &::=& x & \textrm{table variable, named locally by its own capitalization} \\
  &&& x \; \mt{AS} \; X & \textrm{table variable, with local name} \\
  &&& x \; \mt{AS} \; \{c\} & \textrm{table variable, with computed local name} \\
  &&& \{\{e\}\} \; \mt{AS} \; t & \textrm{computed table expression, with local name} \\
  &&& \{\{e\}\} \; \mt{AS} \; \{c\} & \textrm{computed table expression, with computed local name} \\
  \textrm{$\mt{FROM}$ items} & F &::=& T \mid \{\{e\}\} \mid F \; J \; \mt{JOIN} \; F \; \mt{ON} \; E \\
  &&& \mid F \; \mt{CROSS} \; \mt{JOIN} \; F \\
  &&& \mid (Q) \; \mt{AS} \; t \\
  \textrm{Joins} & J &::=& [\mt{INNER}] \\
  &&& \mid [\mt{LEFT} \mid \mt{RIGHT} \mid \mt{FULL}] \; [\mt{OUTER}] \\
  \textrm{SQL expressions} & E &::=& t.f & \textrm{column references} \\
  &&& X & \textrm{named expression references} \\
  &&& \{[e]\} & \textrm{injected native Ur expressions} \\
  &&& \{e\} & \textrm{computed expressions, probably using $\mt{sql\_exp}$ directly} \\
  &&& \mt{TRUE} \mid \mt{FALSE} & \textrm{boolean constants} \\
  &&& \ell & \textrm{primitive type literals} \\
  &&& \mt{NULL} & \textrm{null value (injection of $\mt{None}$)} \\
  &&& E \; \mt{IS} \; \mt{NULL} & \textrm{nullness test} \\
  &&& \mt{COALESCE}(E, E) & \textrm{take first non-null value} \\
  &&& n & \textrm{nullary operators} \\
  &&& u \; E & \textrm{unary operators} \\
  &&& E \; b \; E & \textrm{binary operators} \\
  &&& \mt{COUNT}(\ast) & \textrm{count number of rows} \\
  &&& a(E) & \textrm{other aggregate function} \\
  &&& \mt{IF} \; E \; \mt{THEN} \; E \; \mt{ELSE} \; E & \textrm{conditional} \\
  &&& (Q) & \textrm{subquery (must return a single expression column)} \\
  &&& (E) & \textrm{explicit precedence} \\
  \textrm{Nullary operators} & n &::=& \mt{CURRENT\_TIMESTAMP} \\
  \textrm{Unary operators} & u &::=& \mt{NOT} \\
  \textrm{Binary operators} & b &::=& \mt{AND} \mid \mt{OR} \mid = \mid \neq \mid < \mid \leq \mid > \mid \geq \\
  \textrm{Aggregate functions} & a &::=& \mt{COUNT} \mid \mt{AVG} \mid \mt{SUM} \mid \mt{MIN} \mid \mt{MAX} \\
  \textrm{Directions} & o &::=& \mt{ASC} \mid \mt{DESC} \mid \{e\} \\
  \textrm{SQL integer} & N &::=& n \mid \{e\} \\
  \textrm{Windowable expressions} & \hat{E} &::=& E \\
  &&& w \; [\mt{OVER} \; ( & \textrm{(Postgres only)} \\
  &&& \hspace{.1in} [\mt{PARTITION} \; \mt{BY} \; E] \\
  &&& \hspace{.1in} [\mt{ORDER} \; \mt{BY} \; O])] \\
  \textrm{Window function} & w &::=& \mt{RANK}() \\
  &&& \mt{COUNT}(*) \\
  &&& a(E)
\end{array}$$

Additionally, an SQL expression may be inserted into normal Ur code with the syntax $(\mt{SQL} \; E)$ or $(\mt{WHERE} \; E)$.  Similar shorthands exist for other nonterminals, with the prefix $\mt{FROM}$ for $\mt{FROM}$ items and $\mt{SELECT1}$ for pre-queries.

Unnamed expression columns in $\mt{SELECT}$ clauses are assigned consecutive natural numbers, starting with 1.  Any expression in a $p$ position that is enclosed in parentheses is treated as an expression column, rather than a column pulled directly out of a table, even if it is only a field projection.  (This distinction affects the record type used to describe query results.)
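To illustrate the last point (table and field names illustrative), the parenthesized projection below is typed as an expression column \cd{Succ}, while \cd{t.A} remains a table column in the result record, which has type $\{\mt{T} = \{\mt{A} = \mt{int}\}, \mt{Succ} = \mt{int}\}$:

\begin{verbatim}
table t : { A : int }

fun report () =
    r <- oneRow (SELECT t.A, (t.A + 1) AS Succ FROM t
                 ORDER BY t.A DESC
                 LIMIT 1);
    return <xml>{[r.T.A]} and {[r.Succ]}</xml>
\end{verbatim}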

\subsubsection{DML}

DML commands $D$ are added to the rules for expressions $e$.

$$\begin{array}{rrcll}
  \textrm{Commands} & D &::=& (\mt{INSERT} \; \mt{INTO} \; T^E \; (f,^+) \; \mt{VALUES} \; (E,^+)) \\
  &&& (\mt{UPDATE} \; T^E \; \mt{SET} \; (f = E,)^+ \; \mt{WHERE} \; E) \\
  &&& (\mt{DELETE} \; \mt{FROM} \; T^E \; \mt{WHERE} \; E) \\
  \textrm{Table expressions} & T^E &::=& x \mid \{\{e\}\}
\end{array}$$

Inside $\mt{UPDATE}$ and $\mt{DELETE}$ commands, lone variables $X$ are interpreted as references to columns of the implicit table $\mt{T}$, rather than to named expressions.
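For example (with an illustrative table \cd{t} having an \cd{int} column \cd{A}), note how the lone variable \cd{A} in \cd{SET} and \cd{WHERE} positions refers to the column of the table being modified:

\begin{verbatim}
table t : { A : int }

fun maintain () =
    dml (INSERT INTO t (A) VALUES (0));
    dml (UPDATE t SET A = A + 1 WHERE A < 10);
    dml (DELETE FROM t WHERE A > 100)
\end{verbatim}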

\subsection{XML}

XML fragments $L$ are added to the rules for expressions $e$.

$$\begin{array}{rrcll}
  \textrm{XML fragments} & L &::=& \texttt{<xml/>} \mid \texttt{<xml>}l^*\texttt{</xml>} \\
  \textrm{XML pieces} & l &::=& \textrm{text} & \textrm{cdata} \\
  &&& \texttt{<}g\texttt{/>} & \textrm{tag with no children} \\
  &&& \texttt{<}g\texttt{>}l^*\texttt{</}x\texttt{>} & \textrm{tag with children} \\
  &&& \{e\} & \textrm{computed XML fragment} \\
  &&& \{[e]\} & \textrm{injection of an Ur expression, via the $\mt{Top}.\mt{txt}$ function} \\
  \textrm{Tag} & g &::=& h \; (x = v)^* \\
  \textrm{Tag head} & h &::=& x & \textrm{tag name} \\
  &&& h\{c\} & \textrm{constructor parameter} \\
  \textrm{Attribute value} & v &::=& \ell & \textrm{literal value} \\
  &&& \{e\} & \textrm{computed value} \\
\end{array}$$

Further, there is a special convenience and compatibility form for setting CSS classes of tags.  If a \cd{class} attribute has a value that is a string literal, the literal is parsed in the usual HTML way and replaced with calls to appropriate Ur/Web combinators.  Any dashes in the text are replaced with underscores to determine Ur identifiers.  The same desugaring can be accessed in a normal expression context by calling the pseudo-function \cd{CLASS} on a string literal.

Similar support is provided for \cd{style} attributes.  Normal CSS syntax may be used in string literals that are \cd{style} attribute values, and the desugaring may be accessed elsewhere with the pseudo-function \cd{STYLE}.
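For example (class names illustrative; note that each class used must correspond to a declared Ur identifier, with dashes mapped to underscores):

\begin{verbatim}
style warning
style red_alert

fun main () = return <xml><body>
  <span class="warning red-alert">Watch out!</span>
  <span style="color: red">Also watch out!</span>
</body></xml>
\end{verbatim}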

\section{\label{structure}The Structure of Web Applications}

A web application is built from a series of modules, with one module, the last one appearing in the \texttt{.urp} file, designated as the main module.  The signature of the main module determines the URL entry points to the application.  Such an entry point should have type $\mt{t1} \to \ldots \to \mt{tn} \to \mt{transaction} \; \mt{page}$, for any integer $n \geq 0$, where $\mt{page}$ is a type synonym for top-level HTML pages, defined in $\mt{Basis}$.  If such a function $f$ is at the top level of main module $M$, with $n = 0$, it will be accessible at URI \texttt{/M/f}, and so on for more deeply nested functions, as described in Section \ref{tag} below.  See Section \ref{cl} for information on the \texttt{prefix} and \texttt{rewrite url} directives, which can be used to rewrite the default URIs of different entry point functions.  The final URL of a function is its default module-based URI, with \texttt{rewrite url} rules applied, and with the \texttt{prefix} prepended.  Arguments to an entry-point function are deserialized from the part of the URI following \texttt{f}.

Elements of modules beside the main module, including page handlers, will only be included in the final application if they are transitive dependencies of the handlers in the main module.

Normal links are accessible via HTTP \texttt{GET}, which the relevant standard says should never cause side effects.  To export a page which may cause side effects, accessible only via HTTP \texttt{POST}, include one argument of the page handler of type $\mt{Basis.postBody}$.  When the handler is called, this argument will receive a value that can be deconstructed into a MIME type (with $\mt{Basis.postType}$) and payload (with $\mt{Basis.postData}$).  This kind of handler should not be used with forms that exist solely within Ur/Web apps; for these, use Ur/Web's built-in support, as described below.  It may still be useful to use $\mt{Basis.postBody}$ with form requests submitted by code outside an Ur/Web app.  For such cases, the function $\mt{Top.postFields} : \mt{postBody} \to \mt{list} \; (\mt{string} \times \mt{string})$ may be useful, breaking a \texttt{POST} body of type \texttt{application/x-www-form-urlencoded} into its name-value pairs.

Any normal page handler may also include arguments of type $\mt{option} \; \mt{Basis.queryString}$, which will be handled specially.  Rather than being deserialized from the current URI, such an argument is passed the whole query string that the handler received.  The string may be analyzed by calling $\mt{Basis.show}$ on it.  A handler of this kind may be passed as an argument to $\mt{Basis.effectfulUrl}$ to generate a URL to a page that may be used as a ``callback'' by an external service, such that the handler is allowed to cause side effects.

When the standalone web server receives a request for a known page, it calls the function for that page, ``running'' the resulting transaction to produce the page to return to the client.  Pages link to other pages with the \texttt{link} attribute of the \texttt{a} HTML tag.  A link has type $\mt{transaction} \; \mt{page}$, and the semantics of a link are that this transaction should be run to compute the result page, when the link is followed.  Link targets are assigned URL names in the same way as top-level entry points.

HTML forms are handled in a similar way.  The $\mt{action}$ attribute of a $\mt{submit}$ form tag takes a value of type $\$\mt{use} \to \mt{transaction} \; \mt{page}$, where $\mt{use}$ is a kind-$\{\mt{Type}\}$ record of the form fields used by this action handler.  Action handlers are assigned URL patterns in the same way as above.
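Here is a minimal example (all names illustrative); the handler's argument type is inferred from the form fields used, in this case the record type $\$[\mt{Nam} = \mt{string}]$:

\begin{verbatim}
fun greet r = return <xml><body>Hello, {[r.Nam]}!</body></xml>

fun main () = return <xml><body>
  <form>
    <textbox{#Nam}/>
    <submit action={greet} value="Greet me"/>
  </form>
</body></xml>
\end{verbatim}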

For both links and actions, direct arguments and local variables mentioned implicitly via closures are automatically included in serialized form in URLs, in the order in which they appear in the source code.  Such serialized values may only be drawn from a limited set of types, and programs will fail to compile when the (implicit or explicit) arguments of page handler functions involve disallowed types.  (Keep in mind that every free variable of a function is an implicit argument if it was not defined at the top level of a module.)  For instance:
\begin{itemize}
  \item Functions are disallowed, since there is no obvious way to serialize them safely.
  \item XML fragments are disallowed, since it is unclear how to check client-provided XML to be sure it doesn't break the HTML invariants of the application (for instance, by mutating the DOM in the conventional way, interfering with Ur/Web's functional-reactive regime).
  \item Blobs (``files'') are disallowed, since they can easily have very large serializations that could not fit within most web servers' URL size limits.  (And you probably don't want to be serializing, e.g., image files in URLs, anyway.)
\end{itemize}

Ur/Web programs generally mix server- and client-side code in a fairly transparent way.  The one important restriction is that mixed client-server code must encapsulate all server-side pieces within named functions.  This is because execution of such pieces will be implemented by explicit calls to the remote web server, and it is useful to get the programmer's help in designing the interface to be used.  For example, this makes it easier to allow a client running an old version of an application to continue interacting with a server that has been upgraded to a new version, if the programmer took care to keep the interfaces of all of the old remote calls the same.  The functions implementing these services are assigned names in the same way as normal web entry points, by using module structure.

\medskip

The HTTP standard suggests that GET requests only be used in ways that generate no side effects.  Side-effecting operations should use POST requests instead.  The Ur/Web compiler enforces this rule strictly, via a simple conservative program analysis.  Any page that may have a side effect must be accessed through a form, all of which use POST requests, or via a direct call to a page handler with some argument of type $\mt{Basis.postBody}$.  A page is judged to have a side effect if its code depends syntactically on any of the side-effecting, server-side FFI functions.  Links, forms, and most client-side event handlers are not followed during this syntactic traversal, but \texttt{<body onload=\{...\}>} handlers \emph{are} examined, since they run right away and could just as well be considered parts of main page handlers.

Ur/Web includes a kind of automatic protection against cross-site request forgery attacks.  Whenever any page execution can have side effects and can also read at least one cookie value, all cookie values must be signed cryptographically, to ensure that the user has come to the current page by submitting a form on a real page generated by the proper server.  Signing and signature checking are inserted automatically by the compiler.  This prevents attacks like phishing schemes where users are directed to counterfeit pages with forms that submit to your application, where a user's cookies might be submitted without his knowledge, causing some undesired side effect.

\subsection{Tasks}

In many web applications, it's useful to run code at points other than requests from browsers.  Ur/Web's \emph{task} mechanism facilitates this.  A type family of \emph{task kinds} is in the standard library:

$$\begin{array}{l}
\mt{con} \; \mt{task\_kind} :: \mt{Type} \to \mt{Type} \\
\mt{val} \; \mt{initialize} : \mt{task\_kind} \; \mt{unit} \\
\mt{val} \; \mt{clientLeaves} : \mt{task\_kind} \; \mt{client} \\
\mt{val} \; \mt{periodic} : \mt{int} \to \mt{task\_kind} \; \mt{unit}
\end{array}$$

A task kind names a particular extension point of generated applications, where the type parameter of a task kind describes which extra input data is available at that extension point.  Add task code with the special declaration form $\mt{task} \; e_1 = e_2$, where $e_1$ is a task kind with data $\tau$, and $e_2$ is a function from $\tau$ to $\mt{transaction} \; \mt{unit}$.

The currently supported task kinds are:
\begin{itemize}
\item $\mt{initialize}$: Code that is run when the application starts up.
\item $\mt{clientLeaves}$: Code that is run for each client that the runtime system decides has surfed away.  When a request that generates a new client handle is aborted, that handle will still eventually be passed to $\mt{clientLeaves}$ task code, even though the corresponding browser was never informed of the client handle's existence.  In other words, in general, $\mt{clientLeaves}$ handlers will be called more times than there are actual clients.
\item $\mt{periodic} \; n$: Code that is run when the application starts up and then every $n$ seconds thereafter.
\end{itemize}
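As a sketch (with an illustrative table), task declarations look like this:

\begin{verbatim}
table hits : { N : int }

task initialize = fn () => dml (INSERT INTO hits (N) VALUES (0))

task periodic 60 = fn () => dml (UPDATE hits SET N = N + 1 WHERE TRUE)

task clientLeaves = fn _ => debug "Some client left."
\end{verbatim}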


\section{The Foreign Function Interface}

It is possible to call your own C and JavaScript code from Ur/Web applications, via the foreign function interface (FFI).  The starting point for a new binding is a \texttt{.urs} signature file that presents your external library as a single Ur/Web module (with no nested modules).  Compilation conventions map the types and values that you use into C and/or JavaScript types and values.

It is most convenient to encapsulate an FFI binding with a new \texttt{.urp} file, which applications can include with the \texttt{library} directive in their own \texttt{.urp} files.  A number of directives are likely to show up in the library's project file.

\begin{itemize}
\item \texttt{clientOnly Module.ident} registers a value as being allowed only in client-side code.
\item \texttt{clientToServer Module.ident} declares a type as OK to marshal between clients and servers.  By default, abstract FFI types are not allowed to be marshalled, since your library might be maintaining invariants that the simple serialization code doesn't check.
\item \texttt{effectful Module.ident} registers a function that can have side effects.  It is important to remember to use this directive for each such function, or else the optimizer might change program semantics.  (Note that merely assigning a function a \texttt{transaction}-based type does not mark it as effectful in this way!)
\item \texttt{ffi FILE.urs} names the file giving your library's signature.  You can include multiple such files in a single \texttt{.urp} file, and each file \texttt{mod.urs} defines an FFI module \texttt{Mod}.
\item \texttt{include FILE} requests inclusion of a C header file.
\item \texttt{jsFunc Module.ident=name} gives a mapping from an Ur name for a value to a JavaScript name.
\item \texttt{link FILE} requests that \texttt{FILE} be linked into applications.  It should be a C object or library archive file, and you are responsible for generating it with your own build process.
\item \texttt{script URL} requests inclusion of a JavaScript source file within application HTML.
\item \texttt{serverOnly Module.ident} registers a value as being allowed only in server-side code.
\end{itemize}
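A hypothetical library project file \texttt{mylib.urp}, wrapping both C and JavaScript implementations, might combine several of these directives (all module, value, and file names here are illustrative):

\begin{verbatim}
ffi mylib.urs
include mylib.h
link mylib.o
script http://example.com/mylib.js
effectful Mylib.log
serverOnly Mylib.log
clientOnly Mylib.flash
jsFunc Mylib.flash=myFlash
\end{verbatim}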

\subsection{Writing C FFI Code}

A server-side FFI type or value \texttt{Module.ident} must have a corresponding type or value definition \texttt{uw\_Module\_ident} in C code.  With the current Ur/Web version, it's not generally possible to work with Ur records or complex datatypes in C code, but most other kinds of types are fair game.

\begin{itemize}
  \item Primitive types defined in \texttt{Basis} go through the standard FFI interface themselves, so you may refer to them with names like \texttt{uw\_Basis\_t}.  See \texttt{include/types.h} for their definitions.
  \item Enumeration datatypes, which have only constructors that take no arguments, should be defined using C \texttt{enum}s.  The type is named as for any other type identifier, and each constructor \texttt{c} gets an enumeration constant named \texttt{uw\_Module\_c}.
  \item A datatype \texttt{dt} (such as \texttt{Basis.option}) that has one non-value-carrying constructor \texttt{NC} and one value-carrying constructor \texttt{C} gets special treatment.  Where \texttt{T} is the type of \texttt{C}'s argument, and where we represent \texttt{T} as \texttt{t} in C, we represent \texttt{NC} with \texttt{NULL}.  The representation of \texttt{C} depends on whether we're sure that we don't need to use \texttt{NULL} to represent \texttt{t} values; this condition holds only for strings and complex datatypes.  For such types, \texttt{C v} is represented with the C encoding of \texttt{v}, such that the translation of \texttt{dt} is \texttt{t}.  For other types, \texttt{C v} is represented with a pointer to the C encoding of \texttt{v}, such that the translation of \texttt{dt} is \texttt{t*}.
  \item Ur/Web involves many types of program syntax, such as for HTML and SQL code.  All of these types are implemented with normal C strings, and you may take advantage of that encoding to manipulate code as strings in C FFI code.  Be mindful that, in writing such code, it is your responsibility to maintain the appropriate code invariants, or you may reintroduce the code injection vulnerabilities that Ur/Web rules out.  The most convenient way to extend Ur/Web with functions that, e.g., use natively unsupported HTML tags is to generate the HTML code with the FFI.
\end{itemize}
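The first three conventions can be sketched in C as follows.  This is illustrative only: the module \texttt{Color}, its datatype, and the function are hypothetical, and the \texttt{uw\_context} typedef below is a stub standing in for the real definition in \texttt{include/urweb.h}.

```c
#include <stdlib.h>

/* Stub standing in for the real uw_context from include/urweb.h. */
typedef struct uw_context *uw_context;

/* Hypothetical FFI module "Color" whose color.urs declares:
     datatype color = Red | Green | Blue
   Enumeration datatypes become C enums, one constant per constructor. */
typedef enum { uw_Color_Red, uw_Color_Green, uw_Color_Blue } uw_Color_color;

/* Basis.string is a C string; "option string" uses NULL for None,
   since NULL is never a valid string encoding. */
typedef char *uw_Basis_string;

/* Hypothetical binding for: val describe : color -> option string.
   Even pure functions receive the uw_context as their first argument. */
uw_Basis_string uw_Color_describe(uw_context ctx, uw_Color_color c) {
  (void)ctx; /* unused in this pure function */
  switch (c) {
  case uw_Color_Red: return "red";
  case uw_Color_Green: return "green";
  case uw_Color_Blue: return "blue";
  default: return NULL; /* plays the role of None */
  }
}
```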

The C FFI version of an Ur function with type \texttt{T1 -> ... -> TN -> R} or \texttt{T1 -> ... -> TN -> transaction R} has a C prototype like \texttt{R uw\_Module\_ident(uw\_context, T1, ..., TN)}.  Only functions with types of the second form may have side effects.  \texttt{uw\_context} is the type of state that persists across handling a client request.  Many functions that operate on contexts are prototyped in \texttt{include/urweb.h}.  Most should only be used internally by the compiler.  A few are useful in general FFI implementation:
\begin{itemize}
  \item \begin{verbatim}
void uw_error(uw_context, failure_kind, const char *fmt, ...);
  \end{verbatim}
  Abort the current request processing, giving a \texttt{printf}-style format string and arguments for generating an error message.  The \texttt{failure\_kind} argument can be \texttt{FATAL}, to abort the whole execution; \texttt{BOUNDED\_RETRY}, to try processing the request again from the beginning, but failing if this happens too many times; or \texttt{UNLIMITED\_RETRY}, to repeat processing, with no cap on how many times this can recur.

  All pointers to the context-local heap (see description below of \texttt{uw\_malloc()}) become invalid at the start and end of any execution of a main entry point function of an application.  For example, if the request handler is restarted because of a \texttt{uw\_error()} call with \texttt{BOUNDED\_RETRY} or for any other reason, it is unsafe to access any local heap pointers that may have been stashed somewhere beforehand.

  \item \begin{verbatim}
void uw_set_error_message(uw_context, const char *fmt, ...);
  \end{verbatim}
  This simpler form of \texttt{uw\_error()} saves an error message without immediately aborting execution.

  \item \begin{verbatim}
void uw_push_cleanup(uw_context, void (*func)(void *), void *arg);
void uw_pop_cleanup(uw_context);
  \end{verbatim}
  Manipulate a stack of actions that should be taken if any kind of error condition arises.  Calling the ``pop'' function both removes an action from the stack and executes it.  It is a bug to let a page request handler finish successfully with unpopped cleanup actions.

  Pending cleanup actions aren't intended to have any complex relationship amongst themselves, so, upon request handler abort, pending actions are executed in first-in-first-out order.

  \item \begin{verbatim}
void *uw_malloc(uw_context, size_t);
  \end{verbatim}
  A version of \texttt{malloc()} that allocates memory inside a context's heap, which is managed with region allocation.  Thus, there is no \texttt{uw\_free()}, but you need to be careful not to keep ad-hoc C pointers to this area of memory.  In general, \texttt{uw\_malloc()}ed memory should only be used in ways compatible with the computation model of pure Ur.  This means it is fine to allocate and return a value that could just as well have been built with core Ur code.  In contrast, it is almost never safe to store \texttt{uw\_malloc()}ed pointers in global variables, including when the storage happens implicitly by registering a callback that would take the pointer as an argument.

  For performance and correctness reasons, it is usually preferable to use \texttt{uw\_malloc()} instead of \texttt{malloc()}.  The former manipulates a local heap that can be kept allocated across page requests, while the latter uses global data structures that may face contention during concurrent execution.  However, we emphasize again that \texttt{uw\_malloc()} should never be used to implement some logic that couldn't be implemented trivially by a constant-valued expression in Ur.

  \item \begin{verbatim}
typedef void (*uw_callback)(void *);
typedef void (*uw_callback_with_retry)(void *, int will_retry);
void uw_register_transactional(uw_context, void *data, uw_callback commit,
                               uw_callback rollback, uw_callback_with_retry free);
  \end{verbatim}
  All side effects in Ur/Web programs need to be compatible with transactions, such that any set of actions can be undone at any time.  Thus, you should not perform actions with non-local side effects directly; instead, register handlers to be called when the current transaction is committed or rolled back.  The arguments here give an arbitrary piece of data to be passed to callbacks, a function to call on commit, a function to call on rollback, and a function to call afterward in either case to clean up any allocated resources.  A rollback handler may be called after the associated commit handler has already been called, if some later part of the commit process fails.  A free handler is told whether the runtime system expects to retry the current page request after rollback finishes.

  Any of the callbacks may be \texttt{NULL}.  To accommodate some stubbornly non-transactional real-world actions like sending an e-mail message, Ur/Web treats \texttt{NULL} \texttt{rollback} callbacks specially.  When a transaction commits, all \texttt{commit} actions that have non-\texttt{NULL} rollback actions are tried before any \texttt{commit} actions that have \texttt{NULL} rollback actions.  Thus, if a single execution uses only one non-transactional action, and if that action never fails partway through its execution while still causing an observable side effect, then Ur/Web can maintain the transactional abstraction.

  When a request handler ends with multiple pending transactional actions, their handlers are run in a first-in-last-out stack-like order, wherever the order would otherwise be ambiguous.

  It is not safe for any of these handlers to access a context-local heap through a pointer returned previously by \texttt{uw\_malloc()}, nor should any new calls to that function be made.  Think of the context-local heap as meant for use by the Ur/Web code itself, while transactional handlers execute after the Ur/Web code has finished.

  A handler may signal an error by calling \texttt{uw\_set\_error\_message()}, but it is not safe to call \texttt{uw\_error()} from a handler.  Signaling an error in a commit handler will cause the runtime system to switch to aborting the transaction, immediately after the current commit handler returns.

  \item \begin{verbatim}
void *uw_get_global(uw_context, char *name);
void uw_set_global(uw_context, char *name, void *data, uw_callback free);
  \end{verbatim}
  Different FFI-based extensions may want to associate their own pieces of data with contexts.  The global interface provides a way of doing that, where each extension must come up with its own unique key.  The \texttt{free} argument to \texttt{uw\_set\_global()} explains how to deallocate the saved data.  It is never safe to store \texttt{uw\_malloc()}ed pointers in global variable slots.

\end{itemize}

\subsection{Writing JavaScript FFI Code}

JavaScript is dynamically typed, so Ur/Web type definitions imply no JavaScript code.  The JavaScript identifier for each FFI function is set with the \texttt{jsFunc} directive.  Each identifier can be defined in any JavaScript file that you ask to include with the \texttt{script} directive.

In contrast to C FFI code, JavaScript FFI functions take no extra context argument.  Their argument lists are as you would expect from their Ur types.  Only functions whose ranges take the form \texttt{transaction T} should have side effects; the JavaScript ``return type'' of such a function is \texttt{T}.  Here are the conventions for representing Ur values in JavaScript.

\begin{itemize}
\item Integers, floats, strings, characters, and booleans are represented in the usual JavaScript way.
\item Ur functions are represented in an unspecified way, so you should not rely on any details of their representation.  Named FFI functions are represented as JavaScript functions with as many arguments as their Ur types specify.  To call a non-FFI function \texttt{f} on argument \texttt{x}, run \texttt{execF(f, x)}.  To lift a normal JavaScript function \cd{f} into an Ur/Web JavaScript function, run \cd{flift(f)}.
\item An Ur record is represented with a JavaScript record, where Ur field name \texttt{N} translates to JavaScript field name \texttt{\_N}.  An exception to this rule is that the empty record is encoded as \texttt{null}.
\item \texttt{option}-like types receive special handling similar to their handling in C.  The ``\texttt{None}'' constructor is \texttt{null}, and a use of the ``\texttt{Some}'' constructor on a value \texttt{v} is either \texttt{v}, if the underlying type doesn't need to use \texttt{null}; or \texttt{\{v:v\}} otherwise.
\item Any other datatype represents a non-value-carrying constructor \texttt{C} as \texttt{"C"} and an application of a constructor \texttt{C} to value \texttt{v} as \texttt{\{n:"C", v:v\}}.  This rule only applies to datatypes defined in FFI module signatures; the compiler is free to optimize the representations of other, non-\texttt{option}-like datatypes in arbitrary ways.
\item As in the C FFI, all abstract types of program syntax are implemented with strings in JavaScript.
\end{itemize}
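
To make these conventions concrete, here is how a few Ur values (with hypothetical field, type, and constructor names) would appear to JavaScript FFI code:

\begin{verbatim}
// Ur record {A = 1, B = "x"} of type {A : int, B : string}:
var rec = {_A: 1, _B: "x"};

// The empty record {} is encoded as null:
var emptyRec = null;

// Values of type "option int": None is null, and Some 3 is just 3,
// since int itself never needs null:
var noInt = null;
var someInt = 3;

// Some None at type "option (option int)" must be boxed, because
// the underlying type already uses null for its own None:
var someNone = {v: null};

// A hypothetical "datatype color = Red | Rgb of int" declared in an
// FFI module signature:
var red = "Red";               // non-value-carrying constructor
var rgb = {n: "Rgb", v: 255};  // constructor applied to a value
\end{verbatim}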

It is possible to write JavaScript FFI code that interacts with the functional-reactive structure of a document.  Here is a quick summary of some of the simpler functions to use; descriptions of fancier stuff may be added later on request (and such stuff should be considered ``undocumented features'' until then).

\begin{itemize}
\item Sources should be treated as an abstract type, manipulated via:
  \begin{itemize}
  \item \cd{sc(v)}, to create a source initialized to \cd{v}
  \item \cd{sg(s)}, to retrieve the current value of source \cd{s}
  \item \cd{sv(s, v)}, to set source \cd{s} to value \cd{v}
  \end{itemize}

\item Signals should be treated as an abstract type, manipulated via:
  \begin{itemize}
  \item \cd{sr(v)} and \cd{sb(s, f)}, the ``return'' and ``bind'' monad operators, respectively
  \item \cd{ss(s)}, to produce the signal corresponding to source \cd{s}
  \item \cd{scur(s)}, to get the current value of signal \cd{s}
  \end{itemize}

\item The behavior of the \cd{<dyn>} pseudo-tag may be mimicked by following the right convention in a piece of HTML source code with a type like $\mt{xbody}$.  Such a piece of source code may be encoded with a JavaScript string.  To insert a dynamic section, include a \cd{<script>} tag whose content is just a call \cd{dyn(pnode, s)}.  The argument \cd{pnode} specifies what the relevant enclosing parent tag is.  Use value \cd{"tr"} when the immediate parent is \cd{<tr>}, use \cd{"table"} when the immediate parent is \cd{<table>}, and use \cd{"span"} otherwise.  The argument \cd{s} is a string-valued signal giving the HTML code to be inserted at this point.  As with the usual \cd{<dyn>} tag, that HTML subtree is automatically updated as the value of \cd{s} changes.

\item There is only one supported method of taking HTML values generated in Ur/Web code and adding them to the DOM in FFI JavaScript code: call \cd{setInnerHTML(node, html)} to add HTML content \cd{html} within DOM node \cd{node}.  Merely running \cd{node.innerHTML = html} is not guaranteed to get the job done, though programmers familiar with JavaScript will probably find it useful to think of \cd{setInnerHTML} as having this effect.  The unusual idiom is required because Ur/Web uses a nonstandard representation of HTML, to support infinite nesting of code that may generate code that may generate code that....  The \cd{node} value must already be in the DOM tree at the point when \cd{setInnerHTML} is called, because some plumbing must be set up to interact sensibly with \cd{<dyn>} tags. 

\item It is possible to use the more standard ``IDs and mutation'' style of JavaScript coding, though that style is unidiomatic for Ur/Web and should be avoided wherever possible.  Recall the abstract type $\mt{id}$ and its constructor $\mt{fresh}$, which can be used to generate new unique IDs in Ur/Web code.  Values of this type are represented as strings in JavaScript, and a function \cd{fresh()} is available to generate new unique IDs.  Application-specific ID generation schemes may cause bad interactions with Ur/Web code that also generates IDs, so the recommended approach is to produce IDs only via calls to \cd{fresh()}.  FFI code shouldn't depend on the ID generation scheme (on either server side or client side), but it is safe to include these IDs in tag attributes (in either server-side or client-side code) and manipulate the associated DOM nodes in the standard way (in client-side code).  Be forewarned that this kind of imperative DOM manipulation may confuse the Ur/Web runtime system and interfere with proper behavior of tags like \cd{<dyn>}!
\end{itemize}
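
The source operations listed above compose in the obvious way.  The sketch below pairs a typical usage pattern with minimal stand-in definitions of \cd{sc}, \cd{sg}, and \cd{sv}; these stand-ins are illustrative only, as the real runtime versions additionally propagate changes to dependent signals:

\begin{verbatim}
// Illustrative stand-ins for the runtime's source primitives;
// the real implementations also notify dependent signals on writes.
function sc(v)    { return {data: v}; }  // create a source
function sg(s)    { return s.data; }     // get current value
function sv(s, v) { s.data = v; }        // set value

// Typical FFI usage: read a source, compute, write it back.
var counter = sc(0);
sv(counter, sg(counter) + 1);
sv(counter, sg(counter) + 1);
// sg(counter) is now 2.
\end{verbatim}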

\subsection{Introducing New HTML Tags}

FFI modules may introduce new tags as values with $\mt{Basis.tag}$ types.  See \texttt{basis.urs} for examples of how tags are declared.  The identifier of a tag value is used as its rendering in HTML.  The Ur/Web syntax sugar for XML literals desugars each use of a tag into a reference to an identifier with the same name.  There is no need to provide implementations (i.e., in C or JavaScript code) for such identifiers.

The onus is on the coder of a new tag's interface to think about consequences for code injection attacks, messing with the DOM in ways that may break Ur/Web reactive programming, etc.


\section{Compiler Phases}

The Ur/Web compiler is unconventional in that it relies on a kind of \emph{heuristic compilation}.  Not all valid programs will compile successfully.  Informally, programs fail to compile when they are ``too higher-order.''  Compiler phases do their best to eliminate different kinds of higher-order-ness, but some programs just won't compile.  This is a trade-off for producing very efficient executables.  Compiled Ur/Web programs use native C representations and require no garbage collection.

In this section, we step through the main phases of compilation, noting what consequences each phase has for effective programming.

\subsection{Parse}

The compiler reads a \texttt{.urp} file, figures out which \texttt{.urs} and \texttt{.ur} files it references, and combines them all into what is conceptually a single sequence of declarations in the core language of Section \ref{core}.

\subsection{Elaborate}

This is where type inference takes place, translating programs into an explicit form with no more wildcards.  This phase is the most likely source of compiler error messages.

Those crawling through the compiler source will also want to be aware of another compiler phase, Explify, that occurs immediately afterward.  This phase just translates from an AST language that includes unification variables to a very similar language that doesn't; all variables should have been determined by the end of Elaborate, anyway.  The new AST language also drops some features that are used only for static checking and that have no influence on runtime behavior, like disjointness constraints.

\subsection{Unnest}

Named local function definitions are moved to the top level, to avoid the need to generate closures.

\subsection{Corify}

Module system features are compiled away, through inlining of functor definitions at application sites.  Afterward, most abstraction boundaries are broken, facilitating optimization.

\subsection{Especialize}

Functions are specialized to particular argument patterns.  This is an important trick for avoiding the need to maintain any closures at runtime.  Currently, specialization only happens for prefixes of a function's full list of parameters, so you may need to take care to put arguments of function types before other arguments.  The optimizer will not be effective enough if you use arguments that mix functions and values that must be computed at runtime.  For instance, a tuple of a function and an integer counter would not lead to successful code generation; these should be split into separate arguments via currying.

\subsection{Untangle}

Remove unnecessary mutual recursion, splitting recursive groups into strongly connected components.

\subsection{Shake}

Remove all definitions not needed to run the page handlers that are visible in the signature of the last module listed in the \texttt{.urp} file.

\subsection{Rpcify}

Pieces of code are determined to be client-side, server-side, neither, or both, by figuring out which standard library functions might be needed to execute them.  Calls to server-side functions (e.g., $\mt{query}$) within mixed client-server code are identified and replaced with explicit remote calls.  Some mixed functions may be converted to continuation-passing style to facilitate this transformation.

\subsection{Untangle, Shake}

Repeat these simplifications.

\subsection{\label{tag}Tag}

Assign a URL name to each link and form action.  It is important that these links and actions are written as applications of named functions, because such names are used to generate URL patterns.  A URL pattern has a name built from the full module path of the named function, followed by the function name, with all pieces separated by slashes.  The path of a functor application is based on the name given to the result, rather than the path of the functor itself.

\subsection{Reduce}

Apply definitional equality rules to simplify the program as much as possible.  This effectively includes inlining of every non-recursive definition.

\subsection{Unpoly}

This phase specializes polymorphic functions to the specific arguments passed to them in the program.  If the program contains real polymorphic recursion, Unpoly will be insufficient to avoid later error messages about too much polymorphism.

\subsection{Specialize}

Replace uses of parameterized datatypes with versions specialized to specific parameters.  As for Unpoly, this phase will not be effective enough in the presence of polymorphic recursion or other fancy uses of impredicative polymorphism.

\subsection{Shake}

Here the compiler repeats the earlier Shake phase.

\subsection{Monoize}

Programs are translated to a new intermediate language without polymorphism or non-$\mt{Type}$ constructors.  Error messages may pop up here if earlier phases failed to remove such features.

This is the stage at which concrete names are generated for cookies, tables, and sequences.  They are named following the same convention as for links and actions, based on module path information saved from earlier stages.  Table and sequence names separate path elements with underscores instead of slashes, and they are prefixed by \texttt{uw\_}.

\subsection{MonoOpt}

Simple algebraic laws are applied to simplify the program, focusing especially on efficient imperative generation of HTML pages.

\subsection{MonoUntangle}

Unnecessary mutual recursion is broken up again.

\subsection{MonoReduce}

Equivalents of the definitional equality rules are applied to simplify programs, with inlining again playing a major role.

\subsection{MonoShake, MonoOpt}

Unneeded declarations are removed, and basic optimizations are repeated.

\subsection{Fuse}

The compiler tries to simplify calls to recursive functions whose results are immediately written as page output.  The write action is pushed inside the function definitions to avoid allocation of intermediate results.

\subsection{MonoUntangle, MonoShake}

Fuse often creates more opportunities to remove spurious mutual recursion.

\subsection{Pathcheck}

The compiler checks that no link or action name has been used more than once.

\subsection{Cjrize}

The program is translated to what is more or less a subset of C.  If any use of functions as data remains at this point, the compiler will complain.

\subsection{C Compilation and Linking}

The output of the last phase is pretty-printed as C source code and passed to the C compiler.


\end{document}