(require shell/pipeline)    package: shell-pipeline
This library is not entirely stable.
Forthcoming features include process redirection (similar to <() and >() in Bash).
Some specific things that may change are the names of keyword arguments to run-pipeline, the types of arguments and exact semantics of the redirection options for pipelines, and the extra run-pipeline/ functions.
This library makes unix-style pipelines of external programs and Racket functions easy. You can write something as simple as (run-pipeline '(cat /etc/passwd) '(grep root) '(cut -d : -f 1)), which will print "root\n" to stdout (on unix systems) and return a pipeline object. To get the output as a string instead, use run-pipeline/out the same way. You can also put Racket functions in the pipeline. If you have a Racket implementation of grep called my-grep, you can do (run-pipeline '(cat /etc/passwd) `(,my-grep root) '(cut -d : -f 1)) to get the same results. So you can write all sorts of filter functions in Racket rather than relying on shell commands.
Symbols in pipelines are converted to strings before they are passed as arguments to subprocesses. Arguments to Racket functions are not transformed in any way.
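A short sketch of both rules (my-grep here is a hypothetical Racket filter written for illustration):

```racket
#lang racket
(require shell/pipeline)

;; The symbols ls and -l become the strings "ls" and "-l" before being
;; passed to the subprocess.
(run-pipeline '(ls -l))

;; A Racket function as a pipeline member: it reads lines from its
;; current-input-port and writes matches to its current-output-port.
;; Its argument arrives untransformed -- here as the symbol 'root,
;; since function arguments are not converted to strings.
(define (my-grep pattern)
  (for ([line (in-lines)]
        #:when (regexp-match? (regexp (format "~a" pattern)) line))
    (displayln line)))

(run-pipeline '(cat /etc/passwd) `(,my-grep root))
```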
This library DOES work on MS Windows, and if it can’t find a program it retries the name with .exe appended. But Microsoft doesn’t seem to believe in providing useful shell utilities or in putting program executables on the PATH, so this library will probably remain more useful on Unix than on Windows.
(run-pipeline member ...
              [#:in in
               #:out out
               #:err err
               #:strictness strictness
               #:lazy-timeout lazy-timeout
               #:background? bg?])
 → any/c
  member : (or/c list? pipeline-member-spec?)
  in : (or/c input-port? false/c) = (current-input-port)
  out : (or/c port? false/c path-string-symbol?
              (list/c path-string-symbol? (or/c 'append 'truncate 'error)))
      = (current-output-port)
  err : (or/c port? false/c path-string-symbol?
              (list/c path-string-symbol? (or/c 'append 'truncate 'error)))
      = (current-error-port)
  strictness : (or/c 'strict 'lazy 'permissive) = 'lazy
  lazy-timeout : real? = 1
  bg? : any/c = #f
Each member of the pipeline will have its current-output-port connected to the current-input-port of the next member. The first and last members use in and out, respectively, to communicate with the outside world.
All ports specified (in, out, err) may be a port, the symbol 'null, #f, or a path/string/symbol. The error port may also be 'stdout, in which case the output port is used. If #f is given, a port will be available in the returned pipeline struct (similar to subprocess). If 'null is given, a null output port (or an empty string input port) is used. If a path/string/symbol is given, a file at that path is opened.
Beware that just as with subprocess, if you pass #f to get an input, output, or error port out of a pipeline, the resulting port may be a file-stream-port, and you will need to be sure to close it. Otherwise all file-stream-port handling in the pipeline and for file redirection is done automatically.
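A sketch of the file-redirection options described above (the file names are illustrative):

```racket
#lang racket
(require shell/pipeline)

;; Write ls's output to a file.  The second element of the list controls
;; what happens when the file already exists: 'truncate overwrites it,
;; 'append appends to it, and 'error raises an error.
(run-pipeline '(ls) #:out (list "listing.txt" 'truncate))

;; Discard a member's error output entirely with 'null.
(run-pipeline '(ls /nonexistent) #:err 'null)
```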
strictness determines how success is reported. If strictness is 'strict, then the pipeline is successful when all members are successful, and if there are errors the first member to have an error is reported. If strictness is 'lazy, success is similar, but treats any members that were killed as successful. If strictness is 'permissive, then errors are ignored except for the last pipeline member, which is what bash and most other shell languages do.
Also, if strictness is 'lazy or 'permissive, then when a pipeline member finishes, the members before it may be killed. In permissive mode they may be killed immediately; in lazy mode they have lazy-timeout seconds to finish before they are killed. This killing exists so the pipeline does not wait on long-running (potentially infinite) processes in the middle of a pipeline when only a small part of their output is used -- for instance, when piping a large file (or cat-ing an infinite pseudo-file) to the head command. This mirrors what bash and other shells do.
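For example, this mirrors `yes | head` in a shell (a sketch; under the default 'lazy strictness, yes is killed shortly after head finishes and the killed member still counts as successful):

```racket
#lang racket
(require shell/pipeline)

;; yes would run forever on its own, but once head exits, yes is killed
;; after the lazy-timeout.  Under 'strict strictness the killed member
;; would instead make the pipeline count as failed.
(run-pipeline/out '(yes) '(head -n 3))
;; produces "y\ny\ny\n" rather than running forever
```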
If background? is false, then run-pipeline uses pipeline-wait to wait until it finishes, then returns the status with pipeline-status. If background? is not false, then run-pipeline returns a pipeline object.
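A minimal sketch of the background mode, using the pipeline-wait and pipeline-status functions mentioned above:

```racket
#lang racket
(require shell/pipeline)

;; With #:background? non-false, run-pipeline returns the pipeline
;; object immediately instead of waiting for it to finish.
(define pl (run-pipeline '(sleep 1) #:background? #t))

;; ... do other work here ...

(pipeline-wait pl)    ; block until the pipeline finishes
(pipeline-status pl)  ; then inspect its status
```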
(run-pipeline/out member ... [#:in in #:status-and? status-and?]) → any/c
  member : (or/c list? pipeline-member-spec?)
  in : (or/c input-port? false/c path-string-symbol?) = (open-input-string "")
  status-and? : any/c = #f
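A sketch of capturing pipeline output as a string (output shown for a typical unix system):

```racket
#lang racket
(require shell/pipeline)

;; run-pipeline/out runs the pipeline and returns its output as a string
;; instead of printing it.
(run-pipeline/out '(echo hello))
;; → "hello\n"

;; #:in also accepts a path/string/symbol, opening that file as input.
(run-pipeline/out '(grep root) #:in "/etc/passwd")
```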
(pipeline-member-spec argl [#:err err #:success success-pred]) → pipeline-member-spec?
  argl : any/c
  err : (or/c port? false/c path-string-symbol?
              (list/c path-string-symbol? (or/c 'append 'truncate 'error)))
      = hidden-default-value
  success-pred : (or/c false/c procedure? (listof any/c)) = hidden-default-value
err and success-pred default to placeholder values that the pipeline-running functions can override with their own defaults; ultimately they fall back to (current-error-port) and #f, respectively.
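A sketch of giving one member its own error redirection and success predicate (this assumes the list form of #:success enumerates acceptable exit codes, which is not stated explicitly above):

```racket
#lang racket
(require shell/pipeline)

;; grep exits with status 1 when nothing matches; treat both 0 and 1 as
;; success for this member, and silence its stderr.
(run-pipeline
 '(cat /etc/passwd)
 (pipeline-member-spec '(grep no-such-user)
                       #:err 'null
                       #:success (list 0 1)))
```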
Also, pipelines are synchronizable events, so they can be used with sync.
Takes a procedure that accepts a string as its first argument and returns a string. Returns a procedure that reads its current-input-port into a string, passes that string to the original procedure as its first argument, and displays the resulting string to its current-output-port.
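A sketch of this wrapper, assuming the procedure described here is shellify from this library (the name is elided in the text above):

```racket
#lang racket
(require shell/pipeline)

;; Wrap a string->string function so it can act as a pipeline member:
;; the wrapped version reads all of its input into a string, applies
;; string-upcase, and writes the result out.
(define upcase-member (shellify string-upcase))

(run-pipeline '(echo hello world) `(,upcase-member))
;; prints "HELLO WORLD\n"
```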
If you want an alias in Rash, this is probably not what you want. See shell/pipeline-macro/define-pipeline-alias.
This will likely be renamed to be less confusing.
;; A simple case -- have an alias that sets initial arguments.
(define ls-alias (alias-func (λ args (list* 'ls '--color=auto args))))
;; Slightly more complicated: `find` requires that its path argument go before
;; its modifier flags.
(define find-files-alias (alias-func (λ args `(find ,@args -type f))))
(and/success e ...)
(or/success e ...)