montepy.input_parser.tokens module
- class montepy.input_parser.tokens.CellLexer
Bases: ParticleLexer
A lexer for cell inputs that allows particles.
Added in version 0.2.0: This was added with the major parser rework.
- COMMENT(t)
A c style comment.
- COMPLEMENT = '\\#'
A complement character.
- DOLLAR_COMMENT(t)
A comment starting with a dollar sign.
- FILE_PATH = '[^><:"%,;=&\\(\\)|?*\\s]+'
A file path that covers basically anything that Windows or Linux allows.
- INTERPOLATE = '\\d*I'
An interpolate shortcut.
- JUMP = '\\d*J'
A jump shortcut.
- LOG_INTERPOLATE = '\\d*I?LOG'
A logarithmic interpolate shortcut.
- MESSAGE(t)
A message block.
- MULTIPLY = '[+\\-]?[0-9]+\\.?[0-9]*E?[+\\-]?[0-9]*M'
A multiply shortcut.
- NULL = '0+'
Zero number.
- NUMBER(t)
A float or int number, including “Fortran floats”.
- NUMBER_WORD(t)
An integer followed by letters.
Can be used for library numbers, as well as shortcuts.
E.g.: 80c, or 15i.
- NUM_INTERPOLATE = '\\d+I'
An interpolate shortcut with a number.
- NUM_JUMP = '\\d+J'
A jump shortcut with a number.
- NUM_LOG_INTERPOLATE = '\\d+I?LOG'
A logarithmic interpolate shortcut with a number.
- NUM_MULTIPLY = '[+\\-]?[0-9]+\\.?[0-9]*E?[+\\-]?[0-9]*M'
A multiply shortcut with a number.
- NUM_REPEAT = '\\d+R'
A repeat shortcut with a number.
- REPEAT = '\\d*R'
A repeat shortcut.
- SOURCE_COMMENT(t)
A source comment.
- SPACE(t)
Any white space.
- TALLY_COMMENT(t)
A tally comment.
- TEXT(t)
General text that covers shortcuts and keywords.
- THERMAL_LAW = '[a-z][a-z\\d/-]+\\.\\d+[a-z]'
An MCNP-formatted thermal scattering law, e.g.: lwtr.20t.
- ZAID(t)
A ZAID isotope definition in the MCNP format.
E.g.: 1001.80c.
- begin(cls)
Begin a new lexer state
- error(t)
- static find_column(text, token)
Calculates the column number for the start of this token.
Uses 0-indexing.
Added in version 0.2.0: This was added with the major parser rework.
- Parameters:
text (str) – the text being lexed.
token (sly.lex.Token) – the token currently being processed
- ignore = ''
- literals = {'#', '&', '(', ')', '*', '+', ',', ':', '='}
- pop_state()
Pop a lexer state from the stack
- push_state(cls)
Push a new lexer state onto the stack
- reflags = 66
- regex_module = <module 're'>
- tokenize(text, lineno=1, index=0)
- tokens = {'COMMENT', 'COMPLEMENT', 'DOLLAR_COMMENT', 'INTERPOLATE', 'JUMP', 'KEYWORD', 'LOG_INTERPOLATE', 'MESSAGE', 'MULTIPLY', 'NULL', 'NUMBER', 'PARTICLE', 'PARTICLE_DESIGNATOR', 'REPEAT', 'SPACE', 'TEXT', 'THERMAL_LAW', 'ZAID'}
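The string-valued token attributes above are ordinary regular expressions, so their behavior can be checked with Python's re module alone. The sketch below copies a few shortcut patterns verbatim from this listing and applies them with re.IGNORECASE | re.VERBOSE, which is what reflags = 66 decodes to (2 | 64); it is illustrative only and does not import montepy:

```python
import re

# reflags = 66 decodes to re.IGNORECASE (2) | re.VERBOSE (64).
FLAGS = re.IGNORECASE | re.VERBOSE

# Shortcut patterns copied verbatim from the attribute listing above.
REPEAT = r"\d*R"
NUM_REPEAT = r"\d+R"
JUMP = r"\d*J"
MULTIPLY = r"[+\-]?[0-9]+\.?[0-9]*E?[+\-]?[0-9]*M"

assert re.fullmatch(REPEAT, "3r", FLAGS)         # repeat with a count (case-insensitive)
assert re.fullmatch(REPEAT, "R", FLAGS)          # bare repeat: \d* allows zero digits
assert not re.fullmatch(NUM_REPEAT, "R", FLAGS)  # the NUM_ variant requires a digit
assert re.fullmatch(JUMP, "2J", FLAGS)
assert re.fullmatch(MULTIPLY, "5M", FLAGS)       # plain multiplier
assert re.fullmatch(MULTIPLY, "1.5e2m", FLAGS)   # "Fortran float" style multiplier
```

This illustrates why both `REPEAT` and `NUM_REPEAT` exist: the starred variant also accepts a bare letter, while the NUM_ variant only matches when a count is present.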
- class montepy.input_parser.tokens.DataLexer
Bases: ParticleLexer
A lexer for data inputs.
Added in version 0.2.0: This was added with the major parser rework.
- COMMENT(t)
A c style comment.
- COMPLEMENT = '\\#'
A complement character.
- DOLLAR_COMMENT(t)
A comment starting with a dollar sign.
- FILE_PATH = '[^><:"%,;=&\\(\\)|?*\\s]+'
A file path that covers basically anything that Windows or Linux allows.
- INTERPOLATE = '\\d*I'
An interpolate shortcut.
- JUMP = '\\d*J'
A jump shortcut.
- LOG_INTERPOLATE = '\\d*I?LOG'
A logarithmic interpolate shortcut.
- MESSAGE(t)
A message block.
- MULTIPLY = '[+\\-]?[0-9]+\\.?[0-9]*E?[+\\-]?[0-9]*M'
A multiply shortcut.
- NULL = '0+'
Zero number.
- NUMBER(t)
A float or int number, including “Fortran floats”.
- NUMBER_WORD(t)
An integer followed by letters.
Can be used for library numbers, as well as shortcuts.
E.g.: 80c, or 15i.
- NUM_INTERPOLATE = '\\d+I'
An interpolate shortcut with a number.
- NUM_JUMP = '\\d+J'
A jump shortcut with a number.
- NUM_LOG_INTERPOLATE = '\\d+I?LOG'
A logarithmic interpolate shortcut with a number.
- NUM_MULTIPLY = '[+\\-]?[0-9]+\\.?[0-9]*E?[+\\-]?[0-9]*M'
A multiply shortcut with a number.
- NUM_REPEAT = '\\d+R'
A repeat shortcut with a number.
- PARTICLE_SPECIAL(t)
Particle designators that are special characters.
- REPEAT = '\\d*R'
A repeat shortcut.
- SOURCE_COMMENT(t)
A source comment.
- SPACE(t)
Any white space.
- TALLY_COMMENT(t)
A tally comment.
- TEXT(t)
General text that covers shortcuts and keywords.
- THERMAL_LAW = '[a-z][a-z\\d/-]+\\.\\d+[a-z]'
An MCNP-formatted thermal scattering law, e.g.: lwtr.20t.
- ZAID(t)
A ZAID isotope definition in the MCNP format.
E.g.: 1001.80c.
- begin(cls)
Begin a new lexer state
- error(t)
- static find_column(text, token)
Calculates the column number for the start of this token.
Uses 0-indexing.
Added in version 0.2.0: This was added with the major parser rework.
- Parameters:
text (str) – the text being lexed.
token (sly.lex.Token) – the token currently being processed
- ignore = ''
- literals = {'#', '&', '(', ')', '*', '+', ',', ':', '='}
- pop_state()
Pop a lexer state from the stack
- push_state(cls)
Push a new lexer state onto the stack
- reflags = 66
- regex_module = <module 're'>
- tokenize(text, lineno=1, index=0)
- tokens = {'COMMENT', 'COMPLEMENT', 'DOLLAR_COMMENT', 'INTERPOLATE', 'JUMP', 'KEYWORD', 'LOG_INTERPOLATE', 'MESSAGE', 'MULTIPLY', 'NULL', 'NUMBER', 'PARTICLE', 'PARTICLE_DESIGNATOR', 'REPEAT', 'SOURCE_COMMENT', 'SPACE', 'TALLY_COMMENT', 'TEXT', 'THERMAL_LAW', 'ZAID'}
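The THERMAL_LAW and NULL attributes are likewise plain regexes and can be exercised with the standard re module. A minimal illustrative check, assuming the patterns are matched with the class's documented flags (reflags = 66, i.e. re.IGNORECASE | re.VERBOSE):

```python
import re

FLAGS = re.IGNORECASE | re.VERBOSE  # reflags = 66

# Patterns copied verbatim from the listing above.
THERMAL_LAW = r"[a-z][a-z\d/-]+\.\d+[a-z]"
NULL = r"0+"

# The docs' own example of a thermal scattering law.
assert re.fullmatch(THERMAL_LAW, "lwtr.20t", FLAGS)
assert re.fullmatch(THERMAL_LAW, "grph.10t", FLAGS)
assert not re.fullmatch(THERMAL_LAW, "lwtr", FLAGS)  # a suffix like ".20t" is required
# NULL matches any run of zeros, so "0" and "000" are both a single NULL token.
assert re.fullmatch(NULL, "0", FLAGS)
assert re.fullmatch(NULL, "000", FLAGS)
assert not re.fullmatch(NULL, "10", FLAGS)
```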
- class montepy.input_parser.tokens.MCNP_Lexer
Bases: Lexer
Base lexer for all MCNP lexers.
Provides roughly 90% of the token definitions.
Added in version 0.2.0: This was added with the major parser rework.
- COMMENT(t)
A c style comment.
- COMPLEMENT = '\\#'
A complement character.
- DOLLAR_COMMENT(t)
A comment starting with a dollar sign.
- FILE_PATH = '[^><:"%,;=&\\(\\)|?*\\s]+'
A file path that covers basically anything that Windows or Linux allows.
- INTERPOLATE = '\\d*I'
An interpolate shortcut.
- JUMP = '\\d*J'
A jump shortcut.
- LOG_INTERPOLATE = '\\d*I?LOG'
A logarithmic interpolate shortcut.
- MESSAGE(t)
A message block.
- MULTIPLY = '[+\\-]?[0-9]+\\.?[0-9]*E?[+\\-]?[0-9]*M'
A multiply shortcut.
- NULL = '0+'
Zero number.
- NUMBER(t)
A float or int number, including “Fortran floats”.
- NUMBER_WORD(t)
An integer followed by letters.
Can be used for library numbers, as well as shortcuts.
E.g.: 80c, or 15i.
- NUM_INTERPOLATE = '\\d+I'
An interpolate shortcut with a number.
- NUM_JUMP = '\\d+J'
A jump shortcut with a number.
- NUM_LOG_INTERPOLATE = '\\d+I?LOG'
A logarithmic interpolate shortcut with a number.
- NUM_MULTIPLY = '[+\\-]?[0-9]+\\.?[0-9]*E?[+\\-]?[0-9]*M'
A multiply shortcut with a number.
- NUM_REPEAT = '\\d+R'
A repeat shortcut with a number.
- REPEAT = '\\d*R'
A repeat shortcut.
- SOURCE_COMMENT(t)
A source comment.
- SPACE(t)
Any white space.
- TALLY_COMMENT(t)
A tally comment.
- TEXT(t)
General text that covers shortcuts and keywords.
- THERMAL_LAW = '[a-z][a-z\\d/-]+\\.\\d+[a-z]'
An MCNP-formatted thermal scattering law, e.g.: lwtr.20t.
- ZAID(t)
A ZAID isotope definition in the MCNP format.
E.g.: 1001.80c.
- begin(cls)
Begin a new lexer state
- error(t)
- static find_column(text, token)
Calculates the column number for the start of this token.
Uses 0-indexing.
Added in version 0.2.0: This was added with the major parser rework.
- Parameters:
text (str) – the text being lexed.
token (sly.lex.Token) – the token currently being processed
- ignore = ''
- literals = {'#', '&', '(', ')', '*', '+', ',', ':', '='}
- pop_state()
Pop a lexer state from the stack
- push_state(cls)
Push a new lexer state onto the stack
- reflags = 66
- regex_module = <module 're'>
- tokenize(text, lineno=1, index=0)
- tokens = {'COMMENT', 'COMPLEMENT', 'DOLLAR_COMMENT', 'FILE_PATH', 'INTERPOLATE', 'JUMP', 'KEYWORD', 'LIBRARY_SUFFIX', 'LOG_INTERPOLATE', 'MESSAGE', 'MULTIPLY', 'NULL', 'NUMBER', 'NUMBER_WORD', 'NUM_INTERPOLATE', 'NUM_JUMP', 'NUM_LOG_INTERPOLATE', 'NUM_MULTIPLY', 'NUM_REPEAT', 'PARTICLE', 'PARTICLE_SPECIAL', 'REPEAT', 'SOURCE_COMMENT', 'SPACE', 'SURFACE_TYPE', 'TALLY_COMMENT', 'TEXT', 'THERMAL_LAW', 'ZAID'}
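find_column's documented behavior (the 0-indexed column of a token's start) follows the usual recipe for sly lexers: search backwards from the token's position for the last newline. The sketch below is an illustrative reimplementation, not the montepy source, and takes the token's absolute start index directly rather than a sly.lex.Token:

```python
def find_column(text: str, index: int) -> int:
    """Return the 0-indexed column of the character at ``index`` in ``text``."""
    # Find the newline preceding ``index``; rfind returns -1 when we are
    # on the first line, which makes the arithmetic below still work.
    last_newline = text.rfind("\n", 0, index)
    return index - (last_newline + 1)

text = "1 0 -2 IMP:N=1\n2 0  2"
assert find_column(text, 0) == 0                   # start of the first line
assert find_column(text, text.index("IMP")) == 7   # mid-line token
assert find_column(text, 15) == 0                  # first character after a newline
```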
- class montepy.input_parser.tokens.ParticleLexer
Bases: MCNP_Lexer
A lexer for lexing an input that has particles in it.
Added in version 0.2.0: This was added with the major parser rework.
- COMMENT(t)
A c style comment.
- COMPLEMENT = '\\#'
A complement character.
- DOLLAR_COMMENT(t)
A comment starting with a dollar sign.
- FILE_PATH = '[^><:"%,;=&\\(\\)|?*\\s]+'
A file path that covers basically anything that Windows or Linux allows.
- INTERPOLATE = '\\d*I'
An interpolate shortcut.
- JUMP = '\\d*J'
A jump shortcut.
- LOG_INTERPOLATE = '\\d*I?LOG'
A logarithmic interpolate shortcut.
- MESSAGE(t)
A message block.
- MULTIPLY = '[+\\-]?[0-9]+\\.?[0-9]*E?[+\\-]?[0-9]*M'
A multiply shortcut.
- NULL = '0+'
Zero number.
- NUMBER(t)
A float or int number, including “Fortran floats”.
- NUMBER_WORD(t)
An integer followed by letters.
Can be used for library numbers, as well as shortcuts.
E.g.: 80c, or 15i.
- NUM_INTERPOLATE = '\\d+I'
An interpolate shortcut with a number.
- NUM_JUMP = '\\d+J'
A jump shortcut with a number.
- NUM_LOG_INTERPOLATE = '\\d+I?LOG'
A logarithmic interpolate shortcut with a number.
- NUM_MULTIPLY = '[+\\-]?[0-9]+\\.?[0-9]*E?[+\\-]?[0-9]*M'
A multiply shortcut with a number.
- NUM_REPEAT = '\\d+R'
A repeat shortcut with a number.
- REPEAT = '\\d*R'
A repeat shortcut.
- SOURCE_COMMENT(t)
A source comment.
- SPACE(t)
Any white space.
- TALLY_COMMENT(t)
A tally comment.
- TEXT(t)
General text that covers shortcuts and keywords.
- THERMAL_LAW = '[a-z][a-z\\d/-]+\\.\\d+[a-z]'
An MCNP-formatted thermal scattering law, e.g.: lwtr.20t.
- ZAID(t)
A ZAID isotope definition in the MCNP format.
E.g.: 1001.80c.
- begin(cls)
Begin a new lexer state
- error(t)
- static find_column(text, token)
Calculates the column number for the start of this token.
Uses 0-indexing.
Added in version 0.2.0: This was added with the major parser rework.
- Parameters:
text (str) – the text being lexed.
token (sly.lex.Token) – the token currently being processed
- ignore = ''
- literals = {'#', '&', '(', ')', '*', '+', ',', ':', '='}
- pop_state()
Pop a lexer state from the stack
- push_state(cls)
Push a new lexer state onto the stack
- reflags = 66
- regex_module = <module 're'>
- tokenize(text, lineno=1, index=0)
- tokens = {'COMMENT', 'COMPLEMENT', 'DOLLAR_COMMENT', 'INTERPOLATE', 'JUMP', 'KEYWORD', 'LOG_INTERPOLATE', 'MESSAGE', 'MULTIPLY', 'NULL', 'NUMBER', 'NUMBER_WORD', 'PARTICLE', 'PARTICLE_DESIGNATOR', 'REPEAT', 'SOURCE_COMMENT', 'SPACE', 'TALLY_COMMENT', 'TEXT', 'THERMAL_LAW', 'ZAID'}
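The FILE_PATH attribute is a single negated character class: one or more characters that are not in the set of characters Windows or Linux reserves, and not whitespace. A quick illustrative check with re (pattern copied verbatim from the listing above; montepy is not required):

```python
import re

FLAGS = re.IGNORECASE | re.VERBOSE  # reflags = 66

# Pattern copied verbatim: one or more characters that are neither
# reserved path characters nor whitespace.
FILE_PATH = r'[^><:"%,;=&\(\)|?*\s]+'

assert re.fullmatch(FILE_PATH, "output.txt", FLAGS)
assert re.fullmatch(FILE_PATH, "/home/user/run1.inp", FLAGS)
assert re.fullmatch(FILE_PATH, r"data\run_2.out", FLAGS)  # backslashes are allowed
assert not re.fullmatch(FILE_PATH, "bad|name", FLAGS)     # '|' is excluded
assert not re.fullmatch(FILE_PATH, "two words", FLAGS)    # whitespace ends the token
```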
- class montepy.input_parser.tokens.SurfaceLexer
Bases: MCNP_Lexer
A lexer for Surface inputs.
The main difference is that p will be interpreted as a plane, and not a photon.
Added in version 0.2.0: This was added with the major parser rework.
- COMMENT(t)
A c style comment.
- COMPLEMENT = '\\#'
A complement character.
- DOLLAR_COMMENT(t)
A comment starting with a dollar sign.
- FILE_PATH = '[^><:"%,;=&\\(\\)|?*\\s]+'
A file path that covers basically anything that Windows or Linux allows.
- INTERPOLATE = '\\d*I'
An interpolate shortcut.
- JUMP = '\\d*J'
A jump shortcut.
- LOG_INTERPOLATE = '\\d*I?LOG'
A logarithmic interpolate shortcut.
- MESSAGE(t)
A message block.
- MULTIPLY = '[+\\-]?[0-9]+\\.?[0-9]*E?[+\\-]?[0-9]*M'
A multiply shortcut.
- NULL = '0+'
Zero number.
- NUMBER(t)
A float or int number, including “Fortran floats”.
- NUMBER_WORD(t)
An integer followed by letters.
Can be used for library numbers, as well as shortcuts.
E.g.: 80c, or 15i.
- NUM_INTERPOLATE = '\\d+I'
An interpolate shortcut with a number.
- NUM_JUMP = '\\d+J'
A jump shortcut with a number.
- NUM_LOG_INTERPOLATE = '\\d+I?LOG'
A logarithmic interpolate shortcut with a number.
- NUM_MULTIPLY = '[+\\-]?[0-9]+\\.?[0-9]*E?[+\\-]?[0-9]*M'
A multiply shortcut with a number.
- NUM_REPEAT = '\\d+R'
A repeat shortcut with a number.
- REPEAT = '\\d*R'
A repeat shortcut.
- SOURCE_COMMENT(t)
A source comment.
- SPACE(t)
Any white space.
- TALLY_COMMENT(t)
A tally comment.
- TEXT(t)
General text that covers shortcuts and keywords.
- THERMAL_LAW = '[a-z][a-z\\d/-]+\\.\\d+[a-z]'
An MCNP-formatted thermal scattering law, e.g.: lwtr.20t.
- ZAID(t)
A ZAID isotope definition in the MCNP format.
E.g.: 1001.80c.
- begin(cls)
Begin a new lexer state
- error(t)
- static find_column(text, token)
Calculates the column number for the start of this token.
Uses 0-indexing.
Added in version 0.2.0: This was added with the major parser rework.
- Parameters:
text (str) – the text being lexed.
token (sly.lex.Token) – the token currently being processed
- ignore = ''
- literals = {'#', '&', '(', ')', '*', '+', ',', ':', '='}
- pop_state()
Pop a lexer state from the stack
- push_state(cls)
Push a new lexer state onto the stack
- reflags = 66
- regex_module = <module 're'>
- tokenize(text, lineno=1, index=0)
- tokens = {'COMMENT', 'COMPLEMENT', 'DOLLAR_COMMENT', 'INTERPOLATE', 'JUMP', 'KEYWORD', 'LOG_INTERPOLATE', 'MESSAGE', 'MULTIPLY', 'NULL', 'NUMBER', 'NUMBER_WORD', 'REPEAT', 'SPACE', 'SURFACE_TYPE', 'TEXT', 'THERMAL_LAW', 'ZAID'}