Microsoft.CodeAnalysis.Workspaces
The annotation normally used on nodes to request case correction.
Case corrects all names found in the provided document.
Case corrects all names found in the spans of any nodes annotated with the provided
annotation.
Case corrects all names found in the span.
Case corrects all names found in the provided spans.
Case corrects only things that don't require semantic information.
Case corrects all names found in the spans in the provided document.
Case corrects only things that don't require semantic information.
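The public entry point for this functionality is the CaseCorrector class in Microsoft.CodeAnalysis.CaseCorrection. A minimal sketch of annotating a node and then requesting case correction for just the annotated spans (the document and node are assumed to come from the caller):

```csharp
using System.Threading;
using System.Threading.Tasks;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CaseCorrection;

static class CaseCorrectionExample
{
    public static async Task<Document> CorrectAnnotatedAsync(
        Document document, SyntaxNode nodeToFix, CancellationToken cancellationToken)
    {
        var root = await document.GetSyntaxRootAsync(cancellationToken).ConfigureAwait(false);

        // Attach the well-known case-correction annotation to the node of interest.
        var annotated = root.ReplaceNode(
            nodeToFix,
            nodeToFix.WithAdditionalAnnotations(CaseCorrector.Annotation));
        var newDocument = document.WithSyntaxRoot(annotated);

        // Case-correct all names found in the spans of nodes carrying the annotation.
        return await CaseCorrector.CaseCorrectAsync(
            newDocument, CaseCorrector.Annotation, cancellationToken).ConfigureAwait(false);
    }
}
```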
Determines whether we can change the namespace for the given declaration in the document.
Linked documents are not supported, except for a regular document in a multi-targeting project,
where the container node must be consistent among all linked documents.
Here are the additional requirements for using this service:
- If the given node is a namespace declaration node:
1. It neither contains nor is nested in another namespace declaration
2. The name of the namespace is valid (i.e. no errors)
3. No partial type is declared in the namespace; otherwise its multiple declarations would
end up in different namespaces.
- If the given node is a compilation unit node:
1. It must contain no namespace declaration
2. No partial type is declared in the document; otherwise its multiple declarations would
end up in different namespaces.
- Otherwise, an exception will be thrown.
Returns only when all the requirements above are met.
While this service might be used by features that change a namespace based on some property of the document
(e.g. the Sync Namespace refactoring), that logic is implemented by the individual features themselves and isn't
part of the IChangeNamespaceService service.
Changes the namespace for the given declaration to the specified name.
Everything declared in the will be moved to the new namespace.
The change will only be made if returns and the specified name
is a valid namespace name. Use "" to specify the global namespace.
An will be thrown if:
1. The given node is not a namespace declaration or a compilation unit node.
2. The specified name is null or contains an invalid character.
If the declared namespace is already identical to the specified name, then the call is
a no-op and the original solution will be returned.
Using only the top level namespace declarations of a document, change all of them to the target namespace. Will only
use namespace containers considered valid by
Helper to add all the values of into
without causing any allocations or boxing of enumerators.
Additive classifications types supply additional context to other classifications.
Classifies the provided in the given . This will be done
using an appropriate if one can be found. will be returned if this fails.
Whether or not 'additive' classification spans are included in the results. 'Additive'
spans are things like 'this variable is static' or 'this variable is overwritten', i.e.
they add information to a previous classification.
Classifies the provided in the given . This will be done
using an appropriate if one can be found. will be returned if this fails.
the current document.
The non-intersecting portions of the document to get classified spans for.
The options to use when getting classified spans.
Whether or not 'additive' classification spans are included in the results. 'Additive'
spans are things like 'this variable is static' or 'this variable is overwritten', i.e.
they add information to a previous classification.
A cancellation token.
Ensures that all spans in do not go beyond the spans in . Any spans that are entirely outside of are replaced
with .
Adds all semantic parts to final parts, and adds all portions of that do not
overlap with any semantic parts as well. All final parts will be non-empty. Both and must be sorted.
Returns classified spans in ascending order.
s may have the same . This occurs when there are multiple
s for the same region of code. For example, a reference to a static method
will have two spans, one that designates it as a method, and one that designates it as static.
s may also have overlapping s. This occurs when there are
strings containing regex and/or escape characters.
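The public surface for retrieving these results is Classifier.GetClassifiedSpansAsync. A sketch of observing the duplicate spans described above, e.g. a static method reference producing both a method-name span and an additive 'static symbol' span over the same TextSpan (the document and span are assumed to come from the caller):

```csharp
using System.Threading;
using System.Threading.Tasks;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.Classification;
using Microsoft.CodeAnalysis.Text;

static class ClassificationExample
{
    public static async Task PrintClassificationsAsync(
        Document document, TextSpan span, CancellationToken cancellationToken)
    {
        // Classified spans are returned in ascending order; several spans may
        // share the same TextSpan when additive classifications apply.
        var classifiedSpans = await Classifier.GetClassifiedSpansAsync(
            document, span, cancellationToken).ConfigureAwait(false);

        foreach (var classifiedSpan in classifiedSpans)
            System.Console.WriteLine(
                $"{classifiedSpan.TextSpan}: {classifiedSpan.ClassificationType}");
    }
}
```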
Produce the classifications for the span of text specified. Classification should be
performed as quickly as possible, and should process the text in a lexical fashion.
This allows classification results to be shown to the user when a file is opened before
any additional compiler information is available for the text.
Important: The classification should not consider the context the text exists in, and how
that may affect the final classifications. This may result in incorrect classification
(i.e. identifiers being classified as keywords). These incorrect results will be patched
up when the lexical results are superseded by the calls to AddSyntacticClassifications.
This method is optional and should only be implemented by languages that support
syntax. If the language does not support syntax, callers should use
instead.
Produce the classifications for the span of text specified. The syntax of the document
can be accessed to provide more correct classifications. For example, the syntax can
be used to determine if a piece of text that looks like a keyword should actually be
considered an identifier in its current context.
the current document.
The non-intersecting portions of the document to add classified spans for.
The list to add the spans to.
A cancellation token.
Produce the classifications for the span of text specified. Semantics of the language can be used to
provide richer information for constructs where syntax is insufficient. For example, semantic information
can be used to determine if an identifier should be classified as a type, structure, or something else
entirely.
the current document.
The non-intersecting portions of the document to add classified spans for.
The options to use when adding spans.
The list to add the spans to.
A cancellation token.
This will not include classifications for embedded language constructs in string literals. For that use
.
Produce the classifications for embedded language string literals (e.g. Regex/Json strings) in the span of
text specified.
the current document.
The non-intersecting portions of the document to add classified spans for.
The options to use when adding spans.
The list to add the spans to.
A cancellation token.
Adjust a classification from a previous version of text accordingly based on the current
text. For example, if a piece of text was classified as an identifier in a previous version,
but a character was added that would make it into a keyword, then indicate that here.
This allows the classifier to quickly fix up old classifications as the user types. These
adjustments are allowed to be incorrect as they will be superseded by calls to get the
syntactic and semantic classifications for this version later.
Determines the range of the documents that should be considered syntactically changed after an edit. In
language systems that can reuse major parts of a document after an edit, and which would not need to
recompute classifications for those reused parts, this can speed up processing on a host by not requiring
the host to reclassify all the source in view, but only the source that could have changed.
If determining this is not possible, or potentially expensive, can be returned to
indicate that the entire document should be considered changed and should be syntactically reclassified.
Implementations should attempt to abide by the provided timeout as much as they can, returning the best
information available at that point. As this can be called in performance critical scenarios, it is better
to return quickly with a potentially larger change span (including that of the full document) rather than
spend too much time computing a very precise result.
This method is optional and should only be implemented by languages that support
syntax. If the language does not support syntax, callers should use
instead.
Gets the cached semantic classifications for the specified document and text spans.
The checksum of the solution containing the document.
The ID of the document to get classified spans for.
The non-intersecting portions of the document to get classified spans for.
The type of classified spans to get.
The options to use when getting classified spans.
Whether or not the document is fully loaded.
A cancellation token.
The classified spans for the specified document and text spans.
Tries to get cached semantic classifications for the specified document and the specified . Will return an empty array if not able to.
The key of the document to get cached classified spans for.
The non-intersecting portions of the document to get cached classified spans for.
The type of classified spans to get.
Pass in . This will ensure that the cached
classifications are only returned if they match the content the file currently has.
A cancellation token.
The cached classified spans for the specified document and text spans.
For space efficiency, we encode classified spans as triples of ints in one large array. The
first int is the index of classification type in , and the
second and third ints encode the span.
The syntax node types this classifier is able to classify. This list must contain the precise node types
to match (using n.GetType().Equals(t)); subtype checks are not supported here.
The syntax token kinds this classifier is able to classify.
This method will be called for all nodes that match the types specified by the property.
This method will be called for all tokens that match the kinds specified by the property.
Computes a syntactic text change range that determines the range of a document that was changed by an edit. The
portions outside this change range are guaranteed to be syntactically identical (see ). This algorithm is intended to be fast. It is
technically linear in the number of nodes and tokens that may need to be examined. However, in practice, it should
operate in sub-linear time as it will bail the moment tokens don't match, and it's able to skip over matching
nodes fully without examining the contents of those nodes. This is intended for consumers that want a
reasonably accurate change range computer, but do not want to spend an inordinate amount of time getting the
most accurate and minimal result possible.
This computation is not guaranteed to be minimal. It may return a range that includes parts that are unchanged.
This means it is also legal for the change range to just specify the entire file was changed. The quality of
results will depend on how well the parsers did with incremental parsing, and how much time is given to do the
comparison. In practice, for large files (e.g. 15k lines of code) with standard types of edits, this generally
returns results in around 50-100 microseconds on an i7 3 GHz desktop.
This algorithm will respect the provided timeout to the best of its ability. If any information has been computed
when the timeout elapses, it will be returned.
Apply this annotation to a SyntaxNode to indicate a conflict may exist that requires user understanding and acknowledgment before taking action.
Apply this annotation to an appropriate Syntax element to request that it should be
navigated to by the user after a code action is applied. If present the host should
try to place the user's caret at the beginning of the element.
By using a this navigation location will be resilient
to the transformations performed by the infrastructure.
Namely, it will be resilient to the formatting, reduction, or case correction that
automatically occurs. This allows a code action to specify a desired location for
the user caret to be placed without knowing what actual position that location will
end up at when the action is finally applied.
Apply this annotation to an appropriate SyntaxNode to request that it should be renamed by the user after the action.
Apply this annotation to a SyntaxNode to indicate that a warning message should be presented to the user.
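A sketch of attaching the public annotation types from Microsoft.CodeAnalysis.CodeActions to a node produced by a code action (the node is assumed to come from the caller; the message strings are placeholders):

```csharp
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CodeActions;

static class AnnotationExample
{
    public static SyntaxNode Annotate(SyntaxNode newNode)
    {
        return newNode.WithAdditionalAnnotations(
            // Ask the host to start a rename session on this node after the action.
            RenameAnnotation.Create(),
            // Surface a warning message to the user.
            WarningAnnotation.Create("This change may alter runtime behavior."),
            // Flag a conflict that requires user acknowledgment.
            ConflictAnnotation.Create("A member with this name already exists."));
    }
}
```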
An action produced by a or a .
Special tag that indicates that this is a privileged code action that is allowed to use the priority class.
Tag we use to convey that this code action should only be shown if it's in a host that allows for
non-document changes. For example, if it needs to make project changes, or if it will show host-specific UI.
Note: if the bulk of code action is just document changes, and it does some optional things beyond that
(like navigating the user somewhere) this should not be set. Such a code action is still usable in all
hosts and should be shown to the user. This tag should be provided only if the code action truly
cannot function otherwise.
Currently, this also means that we presume that all 3rd party code actions do not require non-document
changes and we will show them all in all hosts.
A short title describing the action that may appear in a menu.
Two code actions are treated as equivalent if they have equal non-null values and were generated
by the same or .
Equivalence of code actions affects some Visual Studio behavior. For example, if multiple equivalent
code actions result from code fixes or refactorings for a single Visual Studio light bulb instance,
the light bulb UI will present only one code action from each set of equivalent code actions.
Additionally, a Fix All operation will apply only code actions that are equivalent to the original code action.
If two code actions that could be treated as equivalent do not have equal values, Visual Studio behavior
may be less helpful than would be optimal. If two code actions that should be treated as distinct have
equal values, Visual Studio behavior may appear incorrect.
Priority of this particular action within a group of other actions. Less relevant actions should override
this and specify a lower priority so that more important actions are easily accessible to the user. Returns
if not overridden.
Computes the group this code action should be presented in. Legal values
must be between and .
Values outside of this range will be clamped to be within that range. Requests for may be downgraded to, as
poorly behaving high-priority items can cause a negative user experience.
Descriptive tags from .
These tags may influence how the item is displayed.
Child actions contained within this . Can be presented in a host to provide more
potential solution actions to a particular problem. To create a with nested
actions, use .
Code actions that should be presented as hyperlinks in the code action preview pane,
similar to FixAll scopes and Preview Changes but may not apply to ALL CodeAction types.
Bridge method for the SDK. https://github.com/dotnet/roslyn-sdk/issues/1136 tracks removing this.
If this code action contains , this property provides a hint to hosts as to
whether or not it's ok to elide this code action and just present the nested actions instead. When a host
already has a lot of top-level actions to show, it should consider not inlining this action, to
keep the number of options presented to the user low. However, if there are few options to show to the
user, inlining this action could be beneficial as it would allow the user to see and choose one of the
nested options with fewer steps. To create a with nested actions, use .
Gets custom tags for the CodeAction.
Lazily set provider type that registered this code action.
Used for telemetry purposes only.
Used by the CodeFixService and CodeRefactoringService to add the Provider Name as a CustomTag.
The sequence of operations that define the code action.
The sequence of operations that define the code action.
The sequence of operations used to construct a preview.
Override this method if you want to implement a subclass that includes custom 's.
Override this method if you want to implement a subclass that includes custom 's. Prefer overriding this method over when computation is long running and progress should be
shown to the user.
Override this method if you want to implement a that has a set of preview operations that are different
than the operations produced by .
Computes all changes for an entire solution. Override this method if you want to implement a subclass that changes more than one document. Override to report
progress while computing the operations.
Computes all changes for an entire solution. Override this method if you want to implement a subclass that changes more than one document. Prefer overriding this method over when computation is long running and progress should be
shown to the user.
Computes changes for a single document. Override this method if you want to implement a subclass that changes a single document. Override to report
progress while computing the operations.
All code actions are expected to operate on solutions. This method is a helper to simplify the
implementation of for code actions that only need
to change one document.
If this code action does not support changing a single
document.
Computes changes for a single document. Override this method if you want to implement a subclass that changes a single document. Prefer overriding this method over when computation is long running and progress should be
shown to the user.
All code actions are expected to operate on solutions. This method is a helper to simplify the
implementation of for code actions that only need
to change one document.
If this code action does not support changing a single
document.
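A minimal sketch of the single-document helper described above: subclass CodeAction and override GetChangedDocumentAsync, letting the base class turn the changed document into the operations the host applies (the title and transformation here are placeholders):

```csharp
using System.Threading;
using System.Threading.Tasks;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CodeActions;

sealed class RemoveNodeAction : CodeAction
{
    private readonly Document _document;
    private readonly SyntaxNode _nodeToRemove;

    public RemoveNodeAction(Document document, SyntaxNode nodeToRemove)
        => (_document, _nodeToRemove) = (document, nodeToRemove);

    public override string Title => "Remove node";

    // Only this single-document transformation needs to be supplied; the base
    // class produces the solution-level operations from the result.
    protected override async Task<Document> GetChangedDocumentAsync(
        CancellationToken cancellationToken)
    {
        var root = await _document.GetSyntaxRootAsync(cancellationToken).ConfigureAwait(false);
        var newRoot = root.RemoveNode(_nodeToRemove, SyntaxRemoveOptions.KeepNoTrivia);
        return _document.WithSyntaxRoot(newRoot);
    }
}
```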
Used by the batch fixer engine to get the new solution.
Apply post processing steps to any 's.
A list of operations.
A cancellation token.
A new list of operations with post processing steps applied to any 's.
Apply post processing steps to solution changes, like formatting and simplification.
The solution changed by the .
A cancellation token.
Apply post processing steps to a single document:
Reducing nodes annotated with
Formatting nodes annotated with
The document changed by the .
A cancellation token.
A document with the post processing changes applied.
Creates a for a change to a single .
Use this factory when the change is expensive to compute and should be deferred until requested.
Title of the .
Function to create the .
Optional value used to determine the equivalence of the with other s. See .
Code action priority
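A sketch of this deferred single-document factory in use; the transformation runs only when the action is actually invoked ('document' and 'FixDocumentAsync' are placeholders for the provider's own state and transformation):

```csharp
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CodeActions;

// Inside a fix or refactoring provider:
var action = CodeAction.Create(
    title: "Make field readonly",
    // Deferred: only computed when the user selects the action.
    createChangedDocument: cancellationToken =>
        FixDocumentAsync(document, cancellationToken),
    // Used to group this action with equivalent actions and for Fix All.
    equivalenceKey: nameof(FixDocumentAsync));
```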
Creates a for a change to more than one within a .
Use this factory when the change is expensive to compute and should be deferred until requested.
Title of the .
Function to create the .
Optional value used to determine the equivalence of the with other s. See .
Creates a for a change to more than one within a .
Use this factory when the change is expensive to compute and should be deferred until requested.
Title of the .
Function to create the .
Optional value used to determine the equivalence of the with other s. See .
Creates a representing a group of code actions.
Title of the group.
The code actions within the group.
to allow inlining the members of the group into the parent;
otherwise, to require that this group appear as a group with nested actions.
Priority of the code action
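A sketch of the grouping factory described above, available in recent Roslyn versions; 'isInlinable' controls whether the host may flatten the group into its parent menu ('introduceLocal' and 'introduceField' are placeholder child actions):

```csharp
using System.Collections.Immutable;
using Microsoft.CodeAnalysis.CodeActions;

// Present two related fixes under one "Introduce variable" group entry.
var group = CodeAction.Create(
    "Introduce variable",
    ImmutableArray.Create(introduceLocal, introduceField),
    isInlinable: false);
```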
Indicates if this CodeAction was created using one of the 'CodeAction.Create' factory methods.
This is used in to determine the appropriate type
name to log in the CodeAction telemetry.
We do cleanup in N serialized passes. This allows us to process all documents in parallel, while only forking
the solution N times *total* (instead of N times *per* document).
Priority of a particular code action produced by either a or a . Code actions use priorities to group themselves, with lower priority actions showing
up after higher priority ones. Providers should put less relevant code actions into lower priority buckets to
have them appear later in the UI, allowing the user to get to important code actions more quickly.
Lowest priority code actions. Will show up after priority items.
Low priority code action. Will show up after priority items.
Medium priority code action.
High priority code action. Note: High priority is simply a request on the part of a .
The core engine may automatically downgrade these items to priority.
Priority class that a particular or should
run at. Providers are run in priority order, allowing the results of higher priority providers to be computed
and shown to the user without having to wait on, or share computing resources with, lower priority providers.
Providers should choose lower priority classes if they are either:
- Very slow. Slow providers will impede computing results for other providers in the same priority class.
So running in a lower one means that fast providers can still get their results to users quickly.
- Less relevant. Providers that commonly show available options, but those options are less likely to be
taken, should run in lower priority groups. This helps ensure their items are still there when the user wants
them, but aren't as prominently shown.
Only lowest priority suppression and configuration fix providers should be run. Specifically, providers will be run. NOTE: This priority is reserved for suppression and
configuration fix providers and should not be used by regular code fix providers and refactoring providers.
Run this provider at the priority below priority. The provider may run slowly, or its results may be
commonly less relevant for the user.
Run this provider at default priority. The provider will run at reasonable speed and provide results that are
commonly relevant to the user.
Run this provider at high priority. Note: High priority is simply a request on the part of a provider. The core
engine may automatically downgrade these items to priority.
Clamps the value of (which could be any integer) to the legal range of values
present in .
A that can vary with user-specified options. Override one of or to actually compute the operations for this action.
Gets the options to use with this code action.
This method is guaranteed to be called on the UI thread.
A cancellation token.
An implementation specific object instance that holds options for applying the code action.
Gets the 's for this given the specified options.
An object instance returned from a prior call to .
A cancellation token.
Override this method to compute the operations that implement this .
An object instance returned from a call to .
A cancellation token.
Override this method to compute the operations that implement this . Prefer
overriding this method over when computation
is long running and progress should be shown to the user.
A for applying solution changes to a workspace.
may return at most one
. Hosts may provide custom handling for
s, but if a requires custom
host behavior not supported by a single , then instead:
Implement a custom and s
Do not return any from
Directly apply any workspace edits
Handle any custom host behavior
Produce a preview for
by creating a custom or returning a single
to use the built-in preview mechanism
Represents a single operation of a multi-operation code action.
A short title describing the effect of the operation.
Called by the host environment to apply the effect of the operation.
This method is guaranteed to be called on the UI thread.
Called by the host environment to apply the effect of the operation.
This method is guaranteed to be called on the UI thread.
Operations may make all sorts of changes that may not be appropriate during testing
(like popping up UI). So, by default, we don't apply them unless the operation asks
for that to happen.
A code action operation for requesting a document be opened in the host environment.
A code action operation for requesting a document be opened in the host environment.
Represents a preview operation for generating a custom user preview for the operation.
Gets a custom preview control for the operation.
If preview is null and is non-null, then is used to generate the preview.
Get the proper start position based on the span marker type.
Get the proper end position based on the span marker type.
Inject annotations into the node so that it can re-calculate spans for each code cleaner after each tree transformation.
Make sure annotations are positioned outside of any spans. If not, merge two adjacent spans to one.
Retrieves the four tokens around the span, as below.
[previousToken][startToken][SPAN][endToken][nextToken]
Adjust provided span to align to either token's start position or end position.
Finds the closest token (including tokens in structured trivia) to the right of the given position.
Finds the closest token (including tokens in structured trivia) to the left of the given position.
Enum that indicates the type of span marker.
Normal case
Span starts at the beginning of the tree
Span ends at the end of the tree
Internal annotation type to mark span location in the tree.
Indicates the current marker type
Indicates how to find the other side of the span marker if it is missing
Static CodeCleaner class that provides default code cleaning behavior.
Return default code cleaners for a given document.
This can be modified and given to the Cleanup method to provide different cleaners.
Cleans up the whole document.
Optionally you can provide your own options and code cleaners. Otherwise, the default will be used.
Cleans up the document marked with the provided annotation.
Optionally you can provide your own options and code cleaners. Otherwise, the default will be used.
Cleans up the provided span in the document.
Optionally you can provide your own options and code cleaners. Otherwise, the default will be used.
Cleans up the provided spans in the document.
Optionally you can provide your own options and code cleaners. Otherwise, the default will be used.
Cleans up the provided span in the node.
This will only clean up things that don't require semantic information.
Cleans up the provided spans in the node.
This will only clean up things that don't require semantic information.
Internal code cleanup service interface.
This is not supposed to be used directly. It just provides a way to get the right service from each language.
Returns the default code cleaners.
This will run all provided code cleaners in an order that is given to the method.
This will run all provided code cleaners in an order that is given to the method.
This will do cleanups that don't require any semantic information.
Specifies the exact type of the code cleanup exported.
A code cleaner that requires semantic information to do its job.
Returns the name of this provider.
This should apply its code cleanup logic to the spans of the document.
This will run all provided code cleaners in an order that is given to the method.
This will do cleanups that don't require any semantic information.
Default implementation of a that efficiently handles the dispatch logic for fixing
entire solutions. Used by and .
Helper methods for DocumentBasedFixAllProvider common to code fixes and refactorings.
An enum to distinguish whether we are performing a Fix All occurrences operation for a code fix or a code refactoring.
Fix all occurrences logging.
Contains computed information for a given , such as supported diagnostic Ids and supported .
Gets an optional for the given code fix provider or suppression fix provider.
Gets an optional for the given code fix provider.
Gets an optional for the given code refactoring provider.
Gets an optional for the given suppression fix provider.
Represents a FixAllContext for code fixes or refactorings.
Represents a FixAllProvider for code fixes or refactorings.
Represents internal FixAllState for code fixes or refactorings.
Underlying code fix provider or code refactoring provider for the fix all occurrences fix.
Language service for mapping spans for specific s for a fix all occurrences code fix.
Every language that wants to support span based FixAll scopes, such as ,
, should implement this language service. Non-span based FixAll scopes,
such as , and
do not require such a span mapping, and this service will never be called for these scopes. This language service
does not need to be implemented by languages that only intend to support these non-span based FixAll scopes.
For the given and in the given ,
returns the documents and fix all spans within each document that need to be fixed.
Note that this API is only invoked for span based FixAll scopes, i.e.
and .
Represents a single fix. This is essentially a tuple
that holds on to a and the set of
s that this will fix.
This is the diagnostic that will show up in the preview pane header when a particular fix
is selected in the light bulb menu. We also group all fixes with the same
together (into a single SuggestedActionSet) in the light bulb menu.
A given fix can fix one or more diagnostics. However, our light bulb UI (preview pane, grouping
of fixes in the light bulb menu etc.) currently keeps things simple and pretends that
each fix fixes a single .
Implementation-wise the is always the first diagnostic that
the supplied when registering the fix (). This could change
in the future, if we decide to change the UI to depict the true mapping between fixes and diagnostics
or if we decide to use some other heuristic to determine the .
Context for code fixes provided by a .
Document corresponding to the to fix.
For code fixes that support non-source documents by providing a non-default value for
, this property will
throw an . Such fixers should use the
property instead.
TextDocument corresponding to the to fix.
This property should be used instead of property by
code fixes that support non-source documents by providing a non-default value for
Text span within the or to fix.
Diagnostics to fix.
NOTE: All the diagnostics in this collection have the same .
CancellationToken.
Creates a code fix context to be passed into method.
Document to fix.
Text span within the to fix.
Diagnostics to fix.
All the diagnostics must have the same .
Additionally, the of each diagnostic must be in the set of the of the associated .
Delegate to register a fixing a subset of diagnostics.
Cancellation token.
Throws this exception if any of the arguments is null.
Throws this exception if the given is empty,
has a null element or has an element whose span is not equal to .
Creates a code fix context to be passed into method.
Text document to fix.
Text span within the to fix.
Diagnostics to fix.
All the diagnostics must have the same .
Additionally, the of each diagnostic must be in the set of the of the associated .
Delegate to register a fixing a subset of diagnostics.
Cancellation token.
Throws this exception if any of the arguments is null.
Throws this exception if the given is empty,
has a null element or has an element whose span is not equal to .
Creates a code fix context to be passed into method.
Document to fix.
Diagnostic to fix.
The of this diagnostic must be in the set of the of the associated .
Delegate to register a fixing a subset of diagnostics.
Cancellation token.
Throws this exception if any of the arguments is null.
Creates a code fix context to be passed into method.
Text document to fix.
Diagnostic to fix.
The of this diagnostic must be in the set of the of the associated .
Delegate to register a fixing a subset of diagnostics.
Cancellation token.
Throws this exception if any of the arguments is null.
Add supplied to the list of fixes that will be offered to the user.
The that will be invoked to apply the fix.
The subset of being addressed / fixed by the .
Add supplied to the list of fixes that will be offered to the user.
The that will be invoked to apply the fix.
The subset of being addressed / fixed by the .
Add supplied to the list of fixes that will be offered to the user.
The that will be invoked to apply the fix.
The subset of being addressed / fixed by the .
Implement this type to provide fixes for source code problems.
Remember to use so the host environment can offer your fixes in a UI.
A list of diagnostic IDs that this provider can provide fixes for.
Computes one or more fixes for the specified .
A containing context information about the diagnostics to fix.
The context must only contain diagnostics with a included in the for the current provider.
Gets an optional that can fix all/multiple occurrences of diagnostics fixed by this code fix provider.
Return null if the provider doesn't support fix all/multiple occurrences.
Otherwise, you can return any of the well known fix all providers from or implement your own fix all provider.
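Putting the members above together, a minimal provider might look like the following sketch (the class name, the diagnostic ID "DEMO001", and the no-op fix are placeholders for illustration, not part of this documentation):

```csharp
using System.Collections.Immutable;
using System.Composition;
using System.Threading.Tasks;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CodeActions;
using Microsoft.CodeAnalysis.CodeFixes;

[ExportCodeFixProvider(LanguageNames.CSharp, Name = nameof(DemoCodeFixProvider)), Shared]
public sealed class DemoCodeFixProvider : CodeFixProvider
{
    // The diagnostic IDs this provider can fix; "DEMO001" is a placeholder.
    public override ImmutableArray<string> FixableDiagnosticIds
        => ImmutableArray.Create("DEMO001");

    public override Task RegisterCodeFixesAsync(CodeFixContext context)
    {
        // Register one code action per reported diagnostic. The action here is
        // a no-op that returns the document unchanged, purely for illustration.
        foreach (var diagnostic in context.Diagnostics)
        {
            context.RegisterCodeFix(
                CodeAction.Create(
                    title: "Apply demo fix",
                    createChangedDocument: _ => Task.FromResult(context.Document),
                    equivalenceKey: "ApplyDemoFix"),
                diagnostic);
        }

        return Task.CompletedTask;
    }

    // Opt in to "fix all occurrences" using the built-in batch fixer.
    public override FixAllProvider GetFixAllProvider()
        => WellKnownFixAllProviders.BatchFixer;
}
```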
Computes the group this provider should be considered to run at. Legal values
must be between and .
Values outside of this range will be clamped to be within that range. Requests for may be downgraded to , as
poorly behaving high-priority providers can cause a negative user experience.
Priority class this provider should run at. Returns if not overridden. Slower, or less relevant, providers should
override this and return a lower value to not interfere with computation of normal priority providers.
Use this attribute to declare a implementation so that it can be discovered by the host.
Optional name of the .
The source languages this provider can provide fixes for. See .
The document kinds for which this provider can provide code fixes. See .
By default, the provider supports code fixes only for source documents, .
Provide string representations of the document kinds for this property, for example:
DocumentKinds = new[] { nameof(TextDocumentKind.AdditionalDocument) }
The document extensions for which this provider can provide code fixes.
Each extension string must include the leading period, for example, ".txt", ".xaml", ".editorconfig", etc.
By default, this value is null and the document extension is not considered to determine applicability of code fixes.
Attribute constructor used to specify automatic application of a code fix provider.
One language to which the code fix provider applies.
Additional languages to which the code fix provider applies. See .
Helper class for "Fix all occurrences" code fix providers.
Returns all the changed documents produced by fixing the list of provided . The documents will be returned such that fixed documents for a later
diagnostic will appear later than those for an earlier diagnostic.
Take all the changes made to a particular document and determine the text changes caused by each one. Take
those individual text changes and attempt to merge them together in order into .
Provides a base class to write a that fixes documents independently. This type
should be used instead of in the case where fixes for a only affect the the diagnostic was produced in.
This type provides suitable logic for fixing large solutions in an efficient manner. Projects are serially
processed, with all the documents in the project being processed in parallel. Diagnostics are computed for the
project and then appropriately bucketed by document. These are then passed to for implementors to process.
Provides a base class to write a that fixes documents independently. This type
should be used instead of in the case where fixes for a only affect the the diagnostic was produced in.
This type provides suitable logic for fixing large solutions in an efficient manner. Projects are serially
processed, with all the documents in the project being processed in parallel. Diagnostics are computed for the
project and then appropriately bucketed by document. These are then passed to for implementors to process.
Produce a suitable title for the fix-all this type creates in . Override this if customizing that title is desired.
Fix all the present in . The document returned
will only be examined for its content (e.g. its or ). No
other aspects of (like its properties), or changes to the or
it points at will be considered.
The context for the Fix All operation.
The document to fix.
The diagnostics to fix in the document.
The new representing the content of the fixed document.
-or-
, if no changes were made to the document.
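As a sketch of the contract described above (the provider name is hypothetical, and the fix is assumed, for illustration, to simply remove each flagged node):

```csharp
using System.Collections.Immutable;
using System.Threading.Tasks;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CodeFixes;
using Microsoft.CodeAnalysis.Editing;

internal sealed class DemoFixAllProvider : DocumentBasedFixAllProvider
{
    protected override async Task<Document?> FixAllAsync(
        FixAllContext fixAllContext, Document document, ImmutableArray<Diagnostic> diagnostics)
    {
        // Only the returned document's content is consumed by the caller;
        // changes to its project or solution would be ignored.
        var editor = await DocumentEditor.CreateAsync(document, fixAllContext.CancellationToken)
            .ConfigureAwait(false);

        foreach (var diagnostic in diagnostics)
        {
            // Illustrative fix: remove the node each diagnostic was reported on.
            var node = editor.OriginalRoot.FindNode(diagnostic.Location.SourceSpan);
            editor.RemoveNode(node);
        }

        return editor.GetChangedDocument();
    }
}
```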
Context for "Fix all occurrences" code fixes provided by a .
Context for "Fix all occurrences" code fixes provided by a .
Context for "Fix all occurrences" code fixes provided by a .
Solution to fix all occurrences.
Project within which fix all occurrences was triggered.
Document within which fix all occurrences was triggered, null if the is scoped to a project.
Underlying which triggered this fix all.
to fix all occurrences.
Diagnostic Ids to fix.
Note that , and methods
return only diagnostics whose IDs are contained in this set of Ids.
The value expected of a participating in this fix all.
CancellationToken for fix all session.
Progress sink for reporting the progress of a fix-all operation.
Creates a new .
Use this overload when applying fix all to a diagnostic with a source location.
This overload cannot be used with or
value for the .
For those fix all scopes, use the constructor that
takes a 'diagnosticSpan' parameter to identify the containing member or type based
on this span.
Document within which fix all occurrences was triggered.
Underlying which triggered this fix all.
to fix all occurrences.
The value expected of a participating in this fix all.
Diagnostic Ids to fix.
to fetch document/project diagnostics to fix in a .
Cancellation token for fix all computation.
Creates a new with an associated .
Use this overload when applying fix all to a diagnostic with a source location and
using or
for the . When using other fix all scopes,
is not required and other constructor which does not take a diagnostic span can be used instead.
Document within which fix all occurrences was triggered.
Span for the diagnostic for which fix all occurrences was triggered.
Underlying which triggered this fix all.
to fix all occurrences.
The value expected of a participating in this fix all.
Diagnostic Ids to fix.
to fetch document/project diagnostics to fix in a .
Cancellation token for fix all computation.
Creates a new .
Use this overload when applying fix all to a diagnostic with no source location, i.e. .
Project within which fix all occurrences was triggered.
Underlying which triggered this fix all.
to fix all occurrences.
The value expected of a participating in this fix all.
Diagnostic Ids to fix.
to fetch document/project diagnostics to fix in a .
Cancellation token for fix all computation.
Gets all the diagnostics in the given document filtered by .
Gets all the diagnostics in the given for the given filtered by .
Gets all the project-level diagnostics, i.e. diagnostics with no source location, in the given project filtered by .
Gets all the diagnostics in the given project filtered by .
This includes both document-level diagnostics for all documents in the given project and project-level diagnostics, i.e. diagnostics with no source location, in the given project.
Gets all the project diagnostics in the given project filtered by .
If is false, then returns only project-level diagnostics which have no source location.
Otherwise, returns all diagnostics in the project, including the document diagnostics for all documents in the given project.
Gets a new with the given cancellationToken.
Diagnostic provider to fetch document/project diagnostics to fix in a .
Gets all the diagnostics to fix in the given document in a .
Gets all the project-level diagnostics to fix, i.e. diagnostics with no source location, in the given project in a .
Gets all the diagnostics to fix in the given project in a .
This includes both document-level diagnostics for all documents in the given project and project-level diagnostics, i.e. diagnostics with no source location, in the given project.
Diagnostic provider to fetch document/project diagnostics to fix in a ,
which supports a
method to compute diagnostics for a given span within a document.
We need to compute diagnostics for a span when applying a fix all operation in
and scopes.
A regular will compute diagnostics for the entire document and filter out
diagnostics outside the span as a post-filtering step.
A can do this more efficiently by implementing the
method to compute
the diagnostics only for the given 'fixAllSpan' upfront.
Gets all the diagnostics to fix for the given in the given in a .
Implement this abstract type to provide fix all/multiple occurrences code fixes for source code problems.
Alternatively, you can use any of the well known fix all providers from .
Gets the supported scopes for fixing all occurrences of a diagnostic.
By default, it returns the following scopes:
(a)
(b) and
(c)
Gets the diagnostic IDs for which fix all occurrences is supported.
By default, it returns for the given .
Original code fix provider that returned this fix all provider from method.
Gets fix all occurrences fix for the given fixAllContext.
Create a that fixes documents independently. This should be used instead of
in the case where fixes for a
only affect the the diagnostic was produced in.
Callback that will fix the diagnostics present in the provided document. The document returned will only be
examined for its content (e.g. its or ). No other aspects
of it (like attributes), or changes to the or it points at
will be considered.
Create a that fixes documents independently for the given .
This should be used instead of in the case where
fixes for a only affect the the diagnostic was produced in.
Callback that will fix the diagnostics present in the provided document. The document returned will only be
examined for its content (e.g. its or ). No other aspects
of it (like attributes), or changes to the or it points at
will be considered.
Supported s for the fix all provider.
Note that is not supported by the
and should not be part of the supported scopes.
Indicates scope for "Fix all occurrences" code fixes provided by each .
Scope to fix all occurrences of diagnostic(s) in the entire document.
Scope to fix all occurrences of diagnostic(s) in the entire project.
Scope to fix all occurrences of diagnostic(s) in the entire solution.
Custom scope to fix all occurrences of diagnostic(s). This scope can
be used by custom s and custom code fix engines.
Scope to fix all occurrences of diagnostic(s) in the containing member
relative to the trigger span for the original code fix.
Scope to fix all occurrences of diagnostic(s) in the containing type
relative to the trigger span for the original code fix.
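A custom fix all provider advertises which of these scopes it supports by overriding the corresponding member; a hedged sketch (the provider name and the trivial GetFixAsync body are placeholders):

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.CodeAnalysis.CodeActions;
using Microsoft.CodeAnalysis.CodeFixes;

internal sealed class ScopedFixAllProvider : FixAllProvider
{
    // Restrict this provider to document- and project-wide fix all.
    public override IEnumerable<FixAllScope> GetSupportedFixAllScopes()
        => new[] { FixAllScope.Document, FixAllScope.Project };

    // Placeholder: a real provider would compute a combined CodeAction here.
    public override Task<CodeAction?> GetFixAsync(FixAllContext fixAllContext)
        => Task.FromResult<CodeAction?>(null);
}
```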
Diagnostic provider to fetch document/project diagnostics to fix in a .
A dummy fix all provider to represent a no-change provider.
This is only used by public constructors for ,
our internal code fix engine always creates a FixAllContext with a non-null
FixAllProvider. Using a for the public constructors
helps us to avoid a nullable .
Helper to merge many disparate text changes to a single document together into a total set of changes.
Try to merge the changes made to into the tracked changes. If there is any
conflicting change in with existing changes, then no changes are added.
Try to merge the changes made to all the documents in in order into the
tracked changes. If there is any conflicting changes with existing changes for a particular document, then
no changes will be added for it.
Contains well known implementations of .
Default batch fix all provider.
This provider batches all the individual diagnostic fixes across the scope of fix all action,
computes fixes in parallel and then merges all the non-conflicting fixes into a single fix all code action.
This fixer supports fixes for the following fix all scopes:
, ,
and .
The batch fix all provider only batches operations (i.e. ) of type
present within the individual diagnostic fixes. Other types of
operations present within these fixes are ignored.
Provides suppression or configuration code fixes.
Returns true if the given diagnostic can be configured, suppressed or unsuppressed by this provider.
Gets one or more add suppression, remove suppression, or configuration fixes for the specified diagnostics represented as a list of 's.
A list of zero or more potential 'es. It is also safe to return null if there are none.
Gets one or more add suppression, remove suppression, or configuration fixes for the specified no-location diagnostics represented as a list of 's.
A list of zero or more potential 'es. It is also safe to return null if there are none.
Gets an optional that can fix all/multiple occurrences of diagnostics fixed by this fix provider.
Return null if the provider doesn't support fix all/multiple occurrences.
Otherwise, you can return any of the well known fix all providers from or implement your own fix all provider.
Helper type for s that need to provide 'fix all' support in a document by applying
one fix at a time, then recomputing the work to do after that fix is applied. While this is not generally
desirable from a performance perspective (due to the costs of forking a document after each fix), it is sometimes
necessary as individual fixes can impact the code so substantially that successive fixes may no longer apply, or may
have dramatically different data to work with before the fix. For example, if one fix removes statements entirely
that another fix was contained in.
Subclasses must override this to actually provide the fix for a particular diagnostic. The implementation will
be passed the current (containing the changes from all prior fixes), the
the in that document, for the current diagnostic being fixed. And the for that diagnostic. The diagnostic itself is not passed along as it was
computed with respect to the original user document, and as such its and will not be correct.
Use this helper to register multiple fixes () each of which addresses / fixes the same supplied .
Use this helper to register multiple fixes () each of which addresses / fixes the same set of supplied .
Fixes all in the specified .
The fixes are applied to the 's syntax tree via .
The implementation may query options of any document in the 's solution.
Whether or not this diagnostic should be included when performing a FixAll. This is useful for providers that
create multiple diagnostics for the same issue (For example, one main diagnostic and multiple 'faded out code'
diagnostics). FixAll can be invoked from any of those, but we'll only want to perform an edit for one
diagnostic for each of those sets of diagnostics.
This overload differs from in that it also passes along
the in case that would be useful (for example, if the is used).
Only one of these two overloads needs to be overridden if you want to customize behavior.
Whether or not this diagnostic should be included when performing a FixAll. This is useful for providers that
create multiple diagnostics for the same issue (For example, one main diagnostic and multiple 'faded out code'
diagnostics). FixAll can be invoked from any of those, but we'll only want to perform an edit for one
diagnostic for each of those sets of diagnostics.
By default, all diagnostics will be included in fix-all unless they are filtered out here. If only the
diagnostic needs to be queried to make this determination, only this overload needs to be overridden. However,
if information from is needed (for example ), then should be overridden instead.
Only one of these two overloads needs to be overridden if you want to customize behavior.
Context for code refactorings provided by a .
Document corresponding to the to refactor.
For code refactorings that support non-source documents by providing a non-default value for
, this property will
throw an . Such refactorings should use the
property instead.
TextDocument corresponding to the to refactor.
This property should be used instead of property by
code refactorings that support non-source documents by providing a non-default value for
Text span within the or to refactor.
CancellationToken.
Creates a code refactoring context to be passed into method.
Creates a code refactoring context to be passed into method.
Creates a code refactoring context to be passed into method.
Add supplied to the list of refactorings that will be offered to the user.
The that will be invoked to apply the refactoring.
Add supplied applicable to to the list of refactorings that will be offered to the user.
The that will be invoked to apply the refactoring.
The within original document the is applicable to.
should represent a logical section within the original document that the is
applicable to. It doesn't have to precisely represent the exact that will get changed.
Most refactorings will have the kind. This allows us to draw
attention to Extract and Inline refactorings.
When new values are added here we should account for them in the `CodeActionHelpers` class.
Inherit this type to provide source code refactorings.
Remember to use so the host environment can offer your refactorings in a UI.
Computes one or more refactorings for the specified .
Gets an optional that can apply multiple occurrences of code refactoring(s)
registered by this code refactoring provider across the supported s.
Return null if the provider doesn't support fix all operation.
Gets the indicating what kind of refactoring it is.
Computes the group this provider should be considered to run at. Legal values
must be between and .
Values outside of this range will be clamped to be within that range. Requests for may be downgraded to , as
poorly behaving high-priority providers can cause a negative user experience.
Priority class this refactoring provider should run at. Returns if not overridden. Slower, or less relevant, providers should
override this and return a lower value to not interfere with computation of normal priority providers.
Use this attribute to declare a implementation so that it can be discovered by the host.
The name of the .
The source languages for which this provider can provide refactorings. See .
The document kinds for which this provider can provide refactorings. See .
By default, the provider supports refactorings only for source documents, .
The document extensions for which this provider can provide refactorings.
Each extension string must include the leading period, for example, ".txt", ".xaml", ".editorconfig", etc.
By default, this value is null and the document extension is not considered to determine applicability of refactorings.
Attribute constructor used to specify availability of a code refactoring provider.
One language to which the code refactoring provider applies.
Additional languages to which the code refactoring provider applies. See .
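The export attribute and provider base type above combine as in this sketch (the provider name and the no-op refactoring are placeholders for illustration):

```csharp
using System.Composition;
using System.Threading.Tasks;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CodeActions;
using Microsoft.CodeAnalysis.CodeRefactorings;

[ExportCodeRefactoringProvider(LanguageNames.CSharp, Name = nameof(DemoRefactoringProvider)), Shared]
internal sealed class DemoRefactoringProvider : CodeRefactoringProvider
{
    public override async Task ComputeRefactoringsAsync(CodeRefactoringContext context)
    {
        var root = await context.Document.GetSyntaxRootAsync(context.CancellationToken)
            .ConfigureAwait(false);
        var node = root?.FindNode(context.Span);
        if (node is null)
            return;

        // Offer a refactoring for the selected node; the action is a no-op
        // that returns the document unchanged, purely for illustration.
        context.RegisterRefactoring(
            CodeAction.Create(
                title: "Demo refactoring",
                createChangedDocument: _ => Task.FromResult(context.Document),
                equivalenceKey: "DemoRefactoring"));
    }
}
```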
Provides a base class to write a for refactorings that fixes documents independently.
This type should be used in the case where the code refactoring(s) only affect individual s.
This type provides suitable logic for fixing large solutions in an efficient manner. Projects are serially
processed, with all the documents in the project being processed in parallel.
is invoked for each document for implementors to process.
TODO: Make public, tracked with https://github.com/dotnet/roslyn/issues/60703
Provides a base class to write a for refactorings that fixes documents independently.
This type should be used in the case where the code refactoring(s) only affect individual s.
This type provides suitable logic for fixing large solutions in an efficient manner. Projects are serially
processed, with all the documents in the project being processed in parallel.
is invoked for each document for implementors to process.
TODO: Make public, tracked with https://github.com/dotnet/roslyn/issues/60703
Produce a suitable title for the fix-all this type creates in . Override this if customizing that title is desired.
Apply the fix all operation for the code refactoring in the
for the given . The document returned will only be examined for its content
(e.g. its or ). No other aspects of the document (like its properties),
or changes to the or it points at will be considered.
The context for the Fix All operation.
The document to fix.
The spans to fix in the document. If not specified, the entire document needs to be fixed.
The new representing the content of the fixed document.
-or-
, if no changes were made to the document.
Attempts to apply fix all operations returning, for each updated document, either the new syntax root for that
document or its new text. Syntax roots are returned for documents that support them, and are used to perform a
final cleanup pass for formatting/simplification/etc. Text is returned for documents that don't support syntax.
Context for "Fix all occurrences" for code refactorings provided by each .
TODO: Make public, tracked with https://github.com/dotnet/roslyn/issues/60703
Document within which fix all occurrences was triggered.
Underlying which triggered this fix all.
to fix all occurrences.
The value expected of a participating in this fix all.
CancellationToken for fix all session.
Project to fix all occurrences.
Note that this property will always be the containing project of
for publicly exposed FixAllContext instance. However, we might create an intermediate FixAllContext
with null and non-null Project, so we require this internal property for intermediate computation.
Gets the spans to fix by document for the for this fix all occurrences fix.
If no spans are specified, it indicates the entire document needs to be fixed.
Implement this abstract type to provide fix all occurrences support for code refactorings.
TODO: Make public, tracked with https://github.com/dotnet/roslyn/issues/60703
Gets the supported scopes for applying multiple occurrences of a code refactoring.
By default, it returns the following scopes:
(a)
(b) and
(c)
Gets fix all occurrences fix for the given fixAllContext.
Create a that fixes documents independently.
This can be used in the case where refactoring(s) registered by this provider
only affect a single .
Callback that will apply the refactorings present in the provided document. The document returned will only be
examined for its content (e.g. its or ). No other aspects
of it (like attributes), or changes to the or it points at
will be considered.
Create a that fixes documents independently.
This can be used in the case where refactoring(s) registered by this provider
only affect a single .
Callback that will apply the refactorings present in the provided document. The document returned will only be
examined for its content (e.g. its or ). No other aspects
of it (like attributes), or changes to the or it points at
will be considered.
Supported s for the fix all provider.
Note that is not supported by the
and should not be part of the supported scopes.
Original selection span from which FixAll was invoked.
This is used in
to compute fix all spans for
and scopes.
Gets the spans to fix by document for the for this fix all occurrences fix.
If no spans are specified, it indicates the entire document needs to be fixed.
Extractor function that retrieves all nodes that should be considered for extraction of given current node.
The rationale is that when the user selects e.g. an entire local declaration statement [|var a = b;|] it is reasonable
to provide refactoring for the `b` node. Similarly for other types of refactorings.
Should also return given node.
Extractor function that checks and retrieves all nodes whose header the current location is in.
Contains helpers related to asking intuitive semantic questions about a users intent
based on the position of their caret or span of their selection.
True if the user is on a blank line where a member could go inside a type declaration.
This will be between members and not ever inside a member.
Returns an array of instances for refactoring given specified selection
in document. determines whether the returned nodes can have empty spans
or not.
A instance is returned if:
- Selection is zero-width and inside/touching a Token with direct parent of type .
- Selection is zero-width and touching a Token whose ancestor of type ends/starts precisely on the current selection.
- Selection is zero-width and in whitespace that corresponds to a Token whose direct ancestor is of type .
- Selection is zero-width and in a header (defined by ISyntaxFacts helpers) of a node of type .
- A Token whose direct parent of type is selected.
- Selection is zero-width and the wanted node is an expression / argument with selection within such syntax node (arbitrarily deep) on its first line.
- A whole node of a type is selected.
Attempts to extract a Node of type for each Node it considers (see
above). E.g. extracts initializer expressions from declarations and assignments, Property declaration from
any header node, etc.
Note: this function trims all whitespace from both the beginning and the end of given . The trimmed version is then used to determine relevant . It also
handles incomplete selections of tokens gracefully. Over-selection containing leading comments is also
handled correctly.
Public representation of a code style option value. Should only be used for public API.
Internally the value is represented by .
Represents a code style option and an associated notification option. Supports
being instantiated with T as a or an enum type.
CodeStyleOption also has some basic support for migrating an option
forward to an enum type option. Specifically, if a previously serialized
bool-CodeStyleOption is then deserialized into an enum-CodeStyleOption then 'false'
values will be migrated to have the 0-value of the enum, and 'true' values will be
migrated to have the 1-value of the enum.
Similarly, enum-type code options will serialize out in a way that is compatible with
hosts that expect the value to be a boolean. Specifically, if the enum value is 0 or 1
then those values will write back as false/true.
Represents a code style option and an associated notification option. Supports
being instantiated with T as a or an enum type.
CodeStyleOption also has some basic support for migrating an option
forward to an enum type option. Specifically, if a previously serialized
bool-CodeStyleOption is then deserialized into an enum-CodeStyleOption then 'false'
values will be migrated to have the 0-value of the enum, and 'true' values will be
migrated to have the 1-value of the enum.
Similarly, enum-type code options will serialize out in a way that is compatible with
hosts that expect the value to be a boolean. Specifically, if the enum value is 0 or 1
then those values will write back as false/true.
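The bool-to-enum migration and serialization rules described above amount to the following mapping (a simplified sketch for illustration, not the actual implementation):

```csharp
using System;

static class CodeStyleMigrationSketch
{
    // Deserializing a legacy bool value into an enum-typed option:
    // 'false' becomes the enum's 0-value, 'true' its 1-value.
    public static TEnum MigrateBool<TEnum>(bool serialized) where TEnum : struct, Enum
        => (TEnum)Enum.ToObject(typeof(TEnum), serialized ? 1 : 0);

    // Serializing an enum-typed option for hosts that expect a bool:
    // enum values 0 and 1 round-trip as false/true; others keep the enum value.
    public static object Serialize<TEnum>(TEnum value) where TEnum : struct, Enum
    {
        var underlying = Convert.ToInt32(value);
        return underlying is 0 or 1 ? (object)(underlying == 1) : value;
    }
}
```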
Name for the notification option.
Offers different notification styles for enforcing
a code style. Under the hood, it simply maps to
Offers different notification styles for enforcing
a code style. Under the hood, it simply maps to
Notification option to disable or suppress an option with .
Notification option for a silent or hidden option with .
Notification option for a suggestion or an info option with .
Notification option for a warning option with .
Notification option for an error option with .
Given an editor-config code-style-option, gives back the core value part of the
option. For example, if the option is "true:error" or "true" then "true" will be returned
in .
Given an editor-config code-style-option, gives back the constituent parts of the
option. For example, if the option is "true:error" then "true" will be returned
in and will be returned
in . Note that users are allowed to not provide
a NotificationOption, so will default to .
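The "value:severity" parsing described above can be sketched as follows (the helper name is hypothetical; when the severity part is omitted, the real API supplies a default notification option, elided here as null):

```csharp
using System;

static class EditorConfigOptionSketch
{
    // Splits "true:error" into ("true", "error"); a bare "true" yields
    // ("true", null), leaving the caller to supply the default notification.
    public static (string Value, string? Notification) Parse(string option)
    {
        var colonIndex = option.IndexOf(':');
        return colonIndex < 0
            ? (option.Trim(), null)
            : (option[..colonIndex].Trim(), option[(colonIndex + 1)..].Trim());
    }
}
```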
Internal representation of a code style option value. Should be used throughout Roslyn.
The internal values are translated to the public ones (ICodeStyleOption) at the public entry points.
Creates a new from a specified .
The type of the serialized data does not match the type of or the format of the serialized data is invalid.
When user preferences are not yet set for a style, we fall back to the default value.
One such default is that the feature is turned on, so that codegen consumes it,
but with silent enforcement, so that the user is not prompted about their usage.
Use singletons for most common values.
This option says if we should simplify away the . or . in field access expressions.
This option says if we should simplify away the . or . in property access expressions.
This option says if we should simplify away the . or . in method access expressions.
This option says if we should simplify away the . or . in event access expressions.
This option says if we should prefer keyword for Intrinsic Predefined Types in Declarations
This option says if we should prefer keyword for Intrinsic Predefined Types in Member Access Expression
Options that we expect the user to set in editorconfig.
Note: the order of this enum is important. We originally only supported two values,
and we encoded this as a bool with 'true = WhenPossible' and 'false = never'. To
preserve compatibility we map the false value to 0 and the true value to 1. All new
values go after these.
Preferences if a foreach statement is allowed to have an explicit cast not visible in source.
Hidden explicit casts are not allowed. In any location where one might be emitted, users must supply their
own explicit cast to make it apparent that the code may fail at runtime.
Hidden casts are allowed on legacy APIs but not allowed on strongly-typed modern APIs. An API is considered
legacy if enumerating it would produce values of type or itself does not implement . These represent APIs that existed prior to the widespread adoption of generics and
are the reason the language allowed this explicit conversion to not be stated for convenience. With
generics though it is more likely that an explicit cast emitted is an error and the user put in an incorrect
type errantly and would benefit from an alert about the issue.
Prefer namespace N { }
Prefer namespace N;
Preferences for flagging unused parameters.
Assignment preference for unused values from expression statements and assignments.
This option describes the naming rules that should be applied to specified categories of symbols,
and the level to which those rules should be enforced.
Options that we expect the user to set in editorconfig.
enum for each analysis kind.
Provides and caches information about diagnostic analyzers such as ,
instance, s.
Thread-safe.
Supported descriptors of each .
Holds on instances weakly so that we don't keep analyzers coming from package references alive.
They need to be released when the project stops referencing the analyzer.
The purpose of this map is to avoid multiple calls to that might return different values
(they should not but we need a guarantee to function correctly).
Supported suppressions of each .
Holds on instances weakly so that we don't keep suppressors coming from package references alive.
They need to be released when the project stops referencing the suppressor.
The purpose of this map is to avoid multiple calls to that might return different values
(they should not but we need a guarantee to function correctly).
Lazily populated map from diagnostic IDs to diagnostic descriptor.
If same diagnostic ID is reported by multiple descriptors, a null value is stored in the map for that ID.
Returns of given .
Returns of given .
Returns of given
that are not compilation end descriptors.
Returns of given
that are compilation end descriptors.
Returns true if given has a compilation end descriptor
that is reported in the Compilation end action.
Determine whether collection of telemetry is allowed for given .
Language name () or null if the diagnostic is not associated with source code.
Properties for a diagnostic generated by an explicit build.
Create a host/VS specific diagnostic with the given descriptor and message arguments for the given project.
Note that diagnostic created through this API cannot be suppressed with in-source suppression due to performance reasons (see the PERF remark below for details).
Returns true if the diagnostic was generated by an explicit build, not live analysis.
Path to where the diagnostic was originally reported. May be a path to a document in a project, or the
project file itself. This should only be used by clients that truly need to know the original location a
diagnostic was reported at, ignoring things like #line directives or other systems that would map the
diagnostic to a different file or location. Most clients should instead use ,
which contains the final location (file and span) that the diagnostic should be considered at.
Document the diagnostic is associated with. May be null if this is a project diagnostic.
Path and span where the diagnostic has been finally mapped to. If no mapping happened, this will be equal
to . The of this value will be the
fully normalized file path where the diagnostic is located at.
Returns a new location with the same as this, but with updated and corresponding to the respective locations of
within .
Scope for analyzing a document for computing local syntax/semantic diagnostics.
Gets the corresponding to the .
NOTE: Throws an exception if is not an .
IDE-only document based diagnostic analyzer.
It is not allowed to implement both DocumentDiagnosticAnalyzer and DiagnosticAnalyzer.
This lets a VSIX-installed or
specify the priority of the analyzer. A regular always comes before those two different types.
Priority is in ascending order, and this only works on HostDiagnosticAnalyzer, meaning VSIX-installed analyzers in VS.
This is to support partner teams (such as TypeScript and F#) who want to control their analyzers' execution order.
Cache of a to its . We cache this as the latter
computes and allocates expensively every time it is called.
Filters out the diagnostics with the specified .
If is non-null, filters out diagnostics with location outside this span.
Calculates a checksum that contains a project's checksum along with a checksum for each of the project's
transitive dependencies.
This checksum calculation can be used for cases where a feature needs to know if the semantics in this project
changed. For example, for diagnostics or caching computed semantic data. The goal is to ensure that changes to
- Files inside the current project
- Project properties of the current project
- Visible files in referenced projects
- Project properties in referenced projects
are reflected in the metadata we keep so that comparing solutions accurately tells us when we need to recompute
semantic work.
This method of checking for changes has a few important properties that differentiate it from other methods of determining project version.
- Changes to methods inside the current project will be reflected to compute updated diagnostics.
does not change as it only returns top level changes.
- Reloading a project without making any changes will re-use cached diagnostics.
changes as the project is removed, then added resulting in a version change.
This checksum is also affected by the for this project.
As such, it is not usable across different sessions of a particular host.
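The dependency-aware checksum described above can be sketched as follows. This is a toy illustration with hypothetical dict-shaped projects (not the actual workspace API): the project's own files and properties are hashed together with the checksums of its transitive references, so an edit anywhere in the dependency closure changes the result, while recomputing over unchanged inputs does not.

```python
import hashlib

def project_checksum(name, projects):
    """Checksum covering a project and its transitive dependencies.

    `projects` is a hypothetical stand-in: each entry has 'files' (content
    strings), 'properties', and 'refs' (names of referenced projects).
    """
    h = hashlib.sha256()
    seen = set()

    def visit(project_name):
        if project_name in seen:
            return
        seen.add(project_name)
        p = projects[project_name]
        # Project properties of this project.
        for item in sorted(p["properties"].items()):
            h.update(repr(item).encode())
        # Files inside this project.
        for content in p["files"]:
            h.update(hashlib.sha256(content.encode()).digest())
        # Recurse into referenced projects.
        for ref in sorted(p["refs"]):
            visit(ref)

    visit(name)
    return h.hexdigest()
```

Reloading without changes reproduces the same value; editing a file in a referenced project produces a different one.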
A dummy singleton analyzer. Its only purpose is to represent file content load failures in maps that are keyed by .
A placeholder singleton analyzer. Its only purpose is to represent generator-produced diagnostics in maps that are keyed by .
Key is .
We use the key to de-duplicate analyzer references if they are referenced from multiple places.
Key is the language the supports and key for the second map is analyzer reference identity and
for that assembly reference.
Entry will be lazily filled in.
Key is .
Value is set of that belong to the .
We populate it lazily; otherwise, we would bring in all analyzers preemptively
Maps to compiler diagnostic analyzers.
Maps list of analyzer references and to .
TODO: https://github.com/dotnet/roslyn/issues/42848
It is quite common for multiple projects to have the same set of analyzer references, yet we will create
multiple instances of the analyzer list and thus not share the info.
List of host s
Get identity and s map for given
Create identity and s map for given that
includes both host and project analyzers
Create identity and s map for given that
has only project analyzers
Return compiler for the given language.
Given the original location of the diagnostic and the mapped line info based on line directives in source,
applies any necessary adjustments to these diagnostic spans and returns the effective source span for the diagnostic.
For example, for Venus, we might change the mapped location to be the location in the primary buffer.
Additionally, if the secondary buffer location is outside visible user code, then the original location is also adjusted to be within visible user code.
IDE-only project based diagnostic analyzer.
It is not allowed to implement both ProjectDiagnosticAnalyzer and DiagnosticAnalyzer.
This lets a VSIX-installed or
specify the priority of the analyzer. A regular always comes before those two different types.
Priority is in ascending order, and this only works on HostDiagnosticAnalyzer, meaning VSIX-installed analyzers in VS.
This is to support partner teams (such as TypeScript and F#) who want to control their analyzers' execution order.
Information about analyzers supplied by the host (IDE), which can be completely skipped or have their diagnostics partially filtered for the corresponding project
when a project analyzer reference (from NuGet) has equivalent analyzer(s) reporting all or a subset of the diagnostic IDs reported by these analyzers.
Analyzers supplied by the host (IDE), which can be completely skipped for the corresponding project
as the project's analyzer references have equivalent analyzer(s) reporting all diagnostic IDs reported by these analyzers.
Analyzer to diagnostic ID map, such that the diagnostics of those IDs reported by the analyzer should be filtered
for a corresponding project.
This includes the analyzers supplied by the host (IDE), such that the project's analyzer references (from NuGet)
have equivalent analyzer(s) reporting a subset of the diagnostic IDs reported by these analyzers.
Predefined name of diagnostic property which shows in what compilation stage the diagnostic is created.
An implementation of for the compiler that wraps a .
Create a from a .
An implementation of for the compiler that wraps a .
Create a from a .
Resolved path of the document.
Retrieves a with the contents of this file.
that memoize structured (parsed) form of certain complex options to avoid parsing them multiple times.
Storages of these complex options may directly call the specialized getters to reuse the cached values.
Returns the equivalent for a value.
The value.
The equivalent for the value.
If is not one of the expected values.
Returns the equivalent for a value.
The value.
The equivalent for a value; otherwise,
if does not contain a direct equivalent for
.
If is not one of the expected values.
Applies a default severity to a value.
The value.
The default severity.
If is , returns
.
-or-
Otherwise, returns if it has a non-default value.
Each word is capitalized
Every word except the first word is capitalized
Only the first word is capitalized
Every character is capitalized
No characters are capitalized
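The five capitalization schemes above can be illustrated with a small helper. The scheme strings follow the `.editorconfig` `capitalization` values; the underscore separator used for the last three schemes is just for the example, since word separators are configured independently in naming styles.

```python
def apply_capitalization(words, scheme):
    """Apply one of the five capitalization schemes to a list of words."""
    if scheme == "pascal_case":       # each word is capitalized
        return "".join(w.capitalize() for w in words)
    if scheme == "camel_case":        # every word except the first is capitalized
        return words[0].lower() + "".join(w.capitalize() for w in words[1:])
    if scheme == "first_word_upper":  # only the first word is capitalized
        return "_".join([words[0].capitalize()] + [w.lower() for w in words[1:]])
    if scheme == "all_upper":         # every character is capitalized
        return "_".join(w.upper() for w in words)
    if scheme == "all_lower":         # no characters are capitalized
        return "_".join(w.lower() for w in words)
    raise ValueError(f"unknown scheme: {scheme}")
```

For example, the words `["my", "field"]` become `MyField`, `myField`, `My_field`, `MY_FIELD`, or `my_field` depending on the scheme.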
Determines if matches a subset of the symbols matched by . The
implementation determines which properties of are considered for this
evaluation. The subset relation does not necessarily indicate a proper subset.
The first naming rule.
The second naming rule.
if matches a subset of the symbols matched by
on some implementation-defined properties; otherwise, .
This does not handle the case where a method in a base type implicitly implements an
interface method on behalf of one of its derived types.
Contains all information related to Naming Style Preferences.
1. Symbol Specifications
2. Name Style
3. Naming Rule (points to Symbol Specification IDs)
Invalid value; an analyzer must support at least one of the subsequent analysis categories.
Analyzer reports syntax diagnostics (i.e. registers a SyntaxTree action).
Note: an that uses this will not work properly if
it registers a and then ends
up needing to use the . If a
is needed, use or
.
Analyzer reports semantic diagnostics and also supports incremental span based method body analysis.
An analyzer can support incremental method body analysis if edits within a method body only affect the diagnostics reported by the analyzer on the edited method body.
Analyzer reports semantic diagnostics but doesn't support incremental span based method body analysis.
It needs to re-analyze the whole document for reporting semantic diagnostics even for method body editing scenarios.
This interface is a marker for all the analyzers that are built in.
We will record non-fatal-watson if any analyzer with this interface throws an exception.
Also, a built-in analyzer can do things that a third-party analyzer (command line analyzer) can't do,
such as reporting all diagnostic descriptors as hidden even though it can return a different severity at runtime,
or reporting a diagnostic ID that is not reported by SupportedDiagnostics.
This interface is used by the engine to allow this special behavior over command line analyzers.
This category will be used to run the analyzer more efficiently by restricting the scope of analysis
If this analyzer is privileged and should run with higher priority than other analyzers.
Special IDE analyzer to flag unnecessary inline source suppressions,
i.e. pragma and local SuppressMessageAttribute suppressions.
Analyzes the tree, with an optional span scope, and reports unnecessary inline suppressions.
This holds onto diagnostics for a specific version of project snapshot
in a way each kind of diagnostics can be queried fast.
The set of documents that have any kind of diagnostics on them.
Syntax diagnostics from this file.
Semantic diagnostics from this file.
Diagnostics that were produced for these documents, but came from the analysis of other files.
Diagnostics that don't have locations.
We have this builder to avoid creating collections unnecessarily.
The expectation is that, most of the time, most analyzers don't have any diagnostics, so there is no need to actually create any objects.
We have this builder to avoid creating collections unnecessarily.
The expectation is that, most of the time, most analyzers don't have any diagnostics, so there is no need to actually create any objects.
Basically typed tuple.
Any MEF component implementing this interface will be used to redirect analyzer assemblies.
The redirected path is passed to the compiler where it is processed in the standard way,
e.g., the redirected assembly is shadow copied before it's loaded
(this could be improved in the future since shadow copying redirected assemblies is usually unnecessary).
Original full path of the analyzer assembly.
The redirected full path of the analyzer assembly
or if this instance cannot redirect the given assembly.
If two redirectors return different paths for the same assembly, no redirection will be performed.
No thread switching inside this method is allowed.
An interface implemented by hosts to provide the host-level analyzers; for example in Visual Studio for Windows this
is where we'll fetch VSIX-defined analyzers.
Gets the path to any assemblies that represent the closure of razor compiler.
Helper class to manage collections of source-file like things; this exists just to avoid duplicating all the logic for regular source files
and additional files.
This class should be free-threaded, and any synchronization is done via .
This class is otherwise free to operate on private members of if needed.
The map of file paths to the underlying . This document may exist in or has been
pushed to the actual workspace.
A map of explicitly-added "always open" and their associated . This does not contain
any regular files that have been opened.
The map of to whose got added into
The current list of documents that are to be added in this batch.
The current list of documents that are being removed in this batch. Once the document is in this list, it is no longer in .
The current list of document file paths that will be ordered in a batch.
Processes file content changes.
The file path given from the project system.
The file path used in the workspace; it might be different from projectSystemFilePath.
Updates the solution for a set of batch changes.
While it is OK for this method to *read* local state, it cannot *modify* it as this may
be called multiple times (when the workspace update fails due to interceding updates).
A semaphore taken for all mutation of any mutable field in this type.
This is, for now, intentionally pessimistic. There are no doubt ways that we could allow more to run in
parallel, but the current tradeoff favors simplicity of code and "obvious correctness" over something that is
subtle, fast, and wrong.
The number of active batch scopes. If this is zero, we are not batching, non-zero means we are batching.
The set of actual analyzer reference paths that the project knows about.
The set of SDK code style analyzer reference paths that the project knows about.
Paths to analyzers we want to add when the current batch completes.
Paths to analyzers we want to remove when the current batch completes.
If this project is the 'primary' project the project system cares about for a group of Roslyn projects that
correspond to different configurations of a single project system project. by
default.
The full list of all metadata references this project has. References that have internally been converted to project references
will still be in this.
The file watching tokens for the documents in this project. We get the tokens even when we're in a batch, so the files here
may not be in the actual workspace yet.
A file change context used to watch source files, additional files, and analyzer config files for this project. It's automatically set to watch the user's project
directory so we avoid file-by-file watching.
Tracks whether we have been subscribed to the event.
Map of the original dynamic file path to the that was associated with it.
For example, the key is something like Page.cshtml which is given to us from the project system calling
. The value of the map is a generated file that
corresponds to the original path, say Page.g.cs. If we were given a file by the project system but no
provided a file for it, we will record the value as null so we still can track
the addition of the .cshtml file for a later call to .
The workspace snapshot will only have a document with (the value) but not the
original dynamic file path (the key).
We use the same string comparer as in the used by _sourceFiles, below, as these
files are added to that collection too.
Reports a telemetry event if compilation information is being thrown away after being previously computed
The path to the output in obj.
The path to the source generated files.
The default namespace of the project.
In C#, this is defined as the value of "rootnamespace" msbuild property. Right now VB doesn't
have the concept of "default namespace", but we conjure one in workspace by assigning the value
of the project's root namespace to it. So various features can choose to use it for their own purpose.
In the future, we might consider officially exposing "default namespace" for VB projects
(e.g. through a "defaultnamespace" msbuild property)
The max language version supported for this project, if applicable. Useful to help indicate what
language version features should be suggested to a user, as well as if they can be upgraded.
Flag to control if this has already been disposed. Not a boolean only so it can be used with Interlocked.CompareExchange.
Adds a source file to the project from a text container (e.g., a Visual Studio text buffer).
The text container that contains this file.
The file path of the document.
The kind of the source code.
The names of the logical nested folders the document is contained in.
Whether the document is used only for design time (e.g., completion) or also included in a compilation.
A associated with this document
Returns the properties being used for the current metadata reference added to this project. May return multiple properties if
the reference has been added multiple times with different properties.
Clears a list and zeros out the capacity. The lists we use for batching are likely to get large during an initial load, but after
that point should never get that large again.
Clears a list and zeros out the capacity. The lists we use for batching are likely to get large during an initial load, but after
that point should never get that large again.
The main gate to synchronize updates to this solution.
See the Readme.md in this directory for further comments about threading in this area.
Stores the latest state of the project system factory.
Access to this is synchronized via
A set of documents that were added by , and aren't otherwise
tracked for opening/closing.
Should be updated with .
Should be updated with .
Set by the host if the solution is currently closing; this can be used to optimize some things there.
The current path to the solution. Currently this is only used to update the solution path when the first project is added -- we don't have a concept
of the solution path changing in the middle while a bunch of projects are loaded.
Applies a single operation to the workspace. should be a call to one of the protected Workspace.On* methods.
Applies a single operation to the workspace. should be a call to one of the protected Workspace.On* methods.
Applies a single operation to the workspace. should be a call to one of the protected Workspace.On* methods.
Applies a single operation to the workspace that also needs to update the .
should be a call to one of the protected Workspace.On* methods.
Applies a solution transformation to the workspace and triggers workspace changed event for specified .
The transformation shall only update the project of the solution with the specified .
The function must be safe to be attempted multiple times (and not update local state).
Applies a change to the workspace that can do any number of project changes.
The mutation action must be safe to attempt multiple times, in case there are interceding solution changes.
If outside changes need to run under the global lock and run only once, they should use the action.
will always run even if the transformation applied no changes.
This is needed to synchronize with to avoid any races. This
method could be moved down to the core Workspace layer and then could use the synchronization lock there.
Removes the project from the various maps this type maintains; it's still up to the caller to actually remove
the project in one way or another.
Attempts to convert all metadata references to to a project reference to .
The of the project that could be referenced in place
of the output path.
The output path to replace.
Finds all projects that had a project reference to and convert it back to a metadata reference.
The of the project being referenced.
The output path of the given project to remove the link to.
Converts a metadata reference to a project reference if possible.
This must be safe to run multiple times for the same reference as it is called
during a workspace update (which will attempt to apply the update multiple times).
Tries to convert a metadata reference to remove to a project reference.
Gets or creates a PortableExecutableReference instance for the given file path and properties.
Calls to this are expected to be serialized by the caller.
Core helper that handles refreshing the references we have for a particular or .
Immutable data type that holds the current state of the project system factory as well as storing any
incremental state changes in the current workspace update.
This state is updated by various project system update operations under the . Importantly,
this immutable type allows us to discard updates to the state that fail to apply due to interceding workspace
operations.
There are two kinds of state that this type holds that need to support discarding:
- Global state for the (various maps of project information). This
state must be saved between different changes.
- Incremental state for the current change being processed. This state holds information that is not
resilient to being applied multiple times during the workspace update, so it is saved and applied only once the
workspace update is successful.
Global state representing a multimap from an output path to the project outputting to it. Ideally, this
shouldn't ever actually be a true multimap, since we shouldn't have two projects outputting to the same path,
but any bug by a project adding the wrong output path means we could end up with some duplication. In that case,
we'll temporarily have two until (hopefully) somebody removes it.
Global state containing output paths and converted project reference information for each project.
Incremental state containing metadata references removed in the current update.
Incremental state containing metadata references added in the current update.
Incremental state containing analyzer references removed in the current update.
Incremental state containing analyzer references added in the current update.
Returns a new instance with any incremental state that should not be saved between updates cleared.
Gate to guard all mutable fields in this class.
The lock hierarchy means you are allowed to call out of this class and into while holding the lock.
A hashed checksum of the last command line we were set to. We use this
as a low cost (in terms of memory) way to determine if the command line
actually changes and we need to make any downstream updates.
To save space in the managed heap, we dump the entire command-line string to our
temp-storage-service. This is helpful as compiler command-lines can grow extremely large
(especially in cases with many references).
Note: this will be null in the case that the command line is an empty array.
if the command line was updated.
Returns the active path to the rule set file that is being used by this project, or null if there isn't a rule set file.
Returns the parsed command line arguments for the arguments set with .
Overridden by derived classes to provide a hook to modify a with any host-provided values that didn't come from
the command line string.
Overridden by derived classes to provide a hook to modify a with any host-provided values that didn't come from
the command line string.
Called by a derived class to notify that we need to update the settings in the project system for something that will be provided
by either or .
A little helper type to hold onto the being updated in a batch, which also
keeps track of the right to raise when we are done.
A little helper type to hold onto the being updated in a batch, which also
keeps track of the right to raise when we are done.
The kind that encompasses all the changes we've made. It's null if no changes have been made,
and or
if we can't give a more precise type.
The same as but also records
the removed documents into .
Should be called to update the solution if there isn't a specific document change kind that should be
given to
Calculates distance of two nodes based on their significant parts.
Returns false if the nodes don't have any significant parts and should be compared as a whole.
Represents an edit operation on a tree or a sequence of nodes.
Tree node.
Insert:
default(TNode).
Delete:
Deleted node.
Move, Update:
Node in the old tree/sequence.
Insert:
Inserted node.
Delete:
default(TNode)
Move, Update:
Node in the new tree/sequence.
No change.
Node value was updated.
Node was inserted.
Node was deleted.
Node changed parent.
Node changed position within its parent. The parent nodes of the old node and the new node are matching.
Represents a sequence of tree edits.
Calculates Longest Common Subsequence for immutable arrays.
Limit the number of tokens used to compute distance between sequences of tokens so that
we always use the pooled buffers. The combined length of the two sequences being compared
must be less than .
Underlying storage for s allocated on .
The LCS algorithm allocates s of sizes (3, 2*1 + 1, ..., 2*D + 1), always in this order,
where D is at most the sum of lengths of the compared sequences.
The arrays get pushed on a stack as they are built up, then all consumed in the reverse order (stack pop).
Since the exact length of each array in the above sequence is known we avoid allocating each individual array.
Instead we allocate a large buffer serving as a backing storage of a contiguous sequence of arrays
corresponding to stack depths to .
If more storage is needed we chain next large buffer to the previous one in a linked list.
We pool a few of these linked buffers on to conserve allocations.
The max stack depth backed by the first buffer.
Size of the buffer for 100 is ~10K.
For 150 it'd be 91KB, which would be allocated on LOH.
The buffers grow by factor of , so the next buffer will be allocated on LOH.
Do not expand pooled buffers to more than ~12 MB total size (sum of all linked segment sizes).
This threshold is achieved when is greater than = sqrt(size_limit / sizeof(int)).
Calculates Longest Common Subsequence.
Returns a distance [0..1] of the specified sequences.
The smaller the distance, the more similar the sequences are.
Returns a distance [0..1] of the specified sequences.
The smaller the distance, the more similar the sequences are.
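As a rough illustration, one common way to derive such a distance from an LCS length (an assumption for this sketch, not necessarily the library's exact formula) is one minus the fraction of matched elements:

```python
def lcs_length(a, b):
    """Classic O(len(a)*len(b)) dynamic-programming LCS length."""
    prev = [0] * (len(b) + 1)
    for x in a:
        cur = [0]
        for j, y in enumerate(b, 1):
            cur.append(prev[j - 1] + 1 if x == y else max(prev[j], cur[j - 1]))
        prev = cur
    return prev[-1]

def distance(a, b):
    """Distance in [0..1]: 0 for identical sequences, 1 for disjoint ones.

    2*LCS/(|a|+|b|) is the fraction of elements that participate in a match,
    so one minus it measures dissimilarity.
    """
    if not a and not b:
        return 0.0
    return 1.0 - 2.0 * lcs_length(a, b) / (len(a) + len(b))
```

Identical sequences yield 0, sequences with nothing in common yield 1, and partial overlap lands in between.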
Calculates a list of "V arrays" using Eugene W. Myers O(ND) Difference Algorithm
The algorithm was inspired by Myers' Diff Algorithm described in an article by Nicolas Butler:
https://www.codeproject.com/articles/42279/investigating-myers-diff-algorithm-part-of
The author has approved the use of his code from the article under the Apache 2.0 license.
The algorithm works on an imaginary edit graph for A and B which has a vertex at each point in the grid(i, j), i in [0, lengthA] and j in [0, lengthB].
The vertices of the edit graph are connected by horizontal, vertical, and diagonal directed edges to form a directed acyclic graph.
Horizontal edges connect each vertex to its right neighbor.
Vertical edges connect each vertex to the neighbor below it.
Diagonal edges connect vertex (i,j) to vertex (i-1,j-1) if (sequenceA[i-1],sequenceB[j-1]) is true.
Move right along horizontal edge (i-1,j)-(i,j) represents a delete of sequenceA[i-1].
Move down along vertical edge (i,j-1)-(i,j) represents an insert of sequenceB[j-1].
Move along diagonal edge (i-1,j-1)-(i,j) represents a match of sequenceA[i-1] to sequenceB[j-1].
The number of diagonal edges on the path from (0,0) to (lengthA, lengthB) is the length of the longest common subsequence.
The function does not actually allocate this graph. Instead it uses Eugene W. Myers' O(ND) Difference Algorithm to calculate a list of "V arrays" and returns it in a Stack.
A "V array" is a list of end points of so called "snakes".
A "snake" is a path with a single horizontal (delete) or vertical (insert) move followed by 0 or more diagonals (matching pairs).
Unlike the algorithm in the article this implementation stores 'y' indexes and prefers 'right' moves instead of 'down' moves in ambiguous situations
to preserve the behavior of the original diff algorithm (deletes first, inserts after).
The number of items in the list is the length of the shortest edit script = the number of inserts/deletes between the two sequences = D.
The list can be used to determine the matching pairs in the sequences (GetMatchingPairs method) or the full editing script (GetEdits method).
The algorithm uses O(ND) time and memory where D is the number of delete/inserts and N is the sum of lengths of the two sequences.
VArrays store just the y index because x can be calculated: x = y + k.
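The core V-array loop described above can be sketched as follows. This simplified illustration of Myers' algorithm only returns D (the shortest edit script length) rather than the stack of V arrays, and it stores x per diagonal k = x - y, whereas the implementation described above stores y and pools its buffers.

```python
def shortest_edit_script_length(a, b):
    """Length D of the shortest edit script (inserts + deletes) between a and b,
    following the classic formulation of Myers' O(ND) algorithm.

    The 'V array' maps diagonal k = x - y to the furthest x reached; a dict is
    used here instead of the offset-indexed array a tuned implementation uses.
    """
    n, m = len(a), len(b)
    v = {1: 0}
    for d in range(n + m + 1):
        for k in range(-d, d + 1, 2):
            if k == -d or (k != d and v[k - 1] < v[k + 1]):
                x = v[k + 1]       # came from diagonal k+1: a down move (insert)
            else:
                x = v[k - 1] + 1   # came from diagonal k-1: a right move (delete)
            y = x - k
            # Follow the "snake": free diagonal moves over matching elements.
            while x < n and y < m and a[x] == b[y]:
                x += 1
                y += 1
            v[k] = x
            if x >= n and y >= m:
                return d
    return n + m
```

On the classic example A = "abcabba", B = "cbabac", the shortest edit script has length 5.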
Calculates the longest common substring using the Wagner algorithm.
Returns an edit script (a sequence of edits) that transform subtree
to subtree.
Returns an edit script (a sequence of edits) that transform a sequence of nodes
to a sequence of nodes .
or is a null reference.
Represents an edit operation on a sequence of values.
The kind of edit: , , or .
Index in the old sequence, or -1 if the edit is insert.
Index in the new sequence, or -1 if the edit is delete.
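The index convention above (-1 marks the side the edit has no counterpart in) can be illustrated by deriving an edit list from an LCS backtrack. This is a reconstruction for illustration, not the library's implementation.

```python
def sequence_edits(a, b):
    """Return (kind, old_index, new_index) edits transforming a into b.

    old_index is -1 for an insert and new_index is -1 for a delete,
    matching the convention described above.
    """
    # Build the full LCS DP table so we can backtrack through it.
    n, m = len(a), len(b)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if a[i - 1] == b[j - 1] \
                else max(dp[i - 1][j], dp[i][j - 1])
    edits, i, j = [], n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0 and a[i - 1] == b[j - 1]:
            i, j = i - 1, j - 1              # match: no edit emitted
        elif i > 0 and (j == 0 or dp[i - 1][j] >= dp[i][j - 1]):
            i -= 1
            edits.append(("delete", i, -1))  # only an old-sequence index
        else:
            j -= 1
            edits.append(("insert", -1, j))  # only a new-sequence index
    edits.reverse()
    return edits
```

Transforming "ab" into "ac", for instance, yields one delete (old index 1, new index -1) and one insert (old index -1, new index 1).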
Implements a tree differencing algorithm.
Subclasses define relationships among tree nodes, and parameters to the differencing algorithm.
Tree node.
Returns an edit script that transforms to .
Returns a match map of descendants to descendants.
Calculates the distance [0..1] of two nodes.
The more similar the nodes, the smaller the distance.
Used to determine whether two nodes of the same label match.
Even if 0 is returned, the nodes might be slightly different.
Returns true if the specified nodes have equal values.
Called with matching nodes (, ).
Return true if the values of the nodes are the same, or their difference is not important.
The number of distinct labels used in the tree.
Returns an integer label corresponding to the given node.
Returned value must be within [0, LabelCount).
Returns N > 0 if the node with specified label can't change its N-th ancestor node, zero otherwise.
1st ancestor is the node's parent node.
2nd ancestor is the node's grandparent node.
etc.
May return null if the is a leaf.
Enumerates all descendant nodes of the given node in depth-first prefix order.
Returns a parent for the specified node.
Returns true if the specified nodes belong to the same tree.
Returns the position of the node.
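The hooks listed above (label lookup, node distance, children/descendants enumeration) can be sketched as a small abstract class. Names are paraphrased into Python and the toy tuple-tree subclass is hypothetical; the real abstract class differs in detail.

```python
from abc import ABC, abstractmethod

class TreeComparer(ABC):
    """Hooks a tree-differencing algorithm needs from its subclasses."""

    @abstractmethod
    def get_label(self, node): ...           # integer label in [0, label_count)

    @abstractmethod
    def get_distance(self, left, right): ... # [0..1], smaller = more similar

    @abstractmethod
    def get_children(self, node): ...        # may return None for a leaf

    def get_descendants(self, node):
        """Enumerate all descendants in depth-first prefix order."""
        for child in self.get_children(node) or ():
            yield child
            yield from self.get_descendants(child)

class TupleTreeComparer(TreeComparer):
    """Toy comparer over trees written as (label, child, child, ...) tuples."""

    def get_label(self, node):
        return node[0]

    def get_distance(self, left, right):
        return 0.0 if left == right else 1.0

    def get_children(self, node):
        return node[1:] or None
```

A concrete subclass supplies the node relationships; the differencing algorithm itself only ever talks to these hooks.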
This should contain only language-agnostic declarations. Things like record struct should fall under struct, etc.
Represents a class declaration, including record class declarations in C#.
Represents a struct declaration, including record struct declarations in C#.
Represents set accessor declaration of a property, including init accessors in C#.
An editor for making changes to a document's syntax tree.
Creates a new instance.
The specified when the editor was first created.
The of the original document.
Returns the changed .
Adds namespace imports / using directives for namespace references found in the document.
Adds namespace imports / using directives for namespace references found in the document within the span specified.
Adds namespace imports / using directives for namespace references found in the document within the sub-trees annotated with the .
Adds namespace imports / using directives for namespace references found in the document within the spans specified.
Adds namespace imports / using directives for namespace references found in the document.
Adds namespace imports / using directives for namespace references found in the document within the sub-trees annotated with the .
Adds namespace imports / using directives for namespace references found in the document within the spans specified.
Adds namespace imports / using directives for namespace references found in the document.
Adds namespace imports / using directives for namespace references found in the document within the sub-trees annotated with the .
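A hedged usage sketch of these overloads via the public ImportAdder helper ('document' is assumed to be a Microsoft.CodeAnalysis.Document, and 'GetAnnotationUsedEarlier' is a hypothetical helper standing in for wherever the annotation was attached):

```csharp
// 'annotation' must already be attached to the sub-trees whose
// namespace references need imports.
SyntaxAnnotation annotation = GetAnnotationUsedEarlier(); // hypothetical

// Whole document:
Document withAllImports = await ImportAdder.AddImportsAsync(document);

// Only within annotated sub-trees:
Document withScopedImports = await ImportAdder.AddImportsAsync(document, annotation);
```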
The name assigned to an implicit (widening) conversion.
The name assigned to an explicit (narrowing) conversion.
The name assigned to the Addition operator.
The name assigned to the BitwiseAnd operator.
The name assigned to the BitwiseOr operator.
The name assigned to the Decrement operator.
The name assigned to the Division operator.
The name assigned to the Equality operator.
The name assigned to the ExclusiveOr operator.
The name assigned to the False operator.
The name assigned to the GreaterThan operator.
The name assigned to the GreaterThanOrEqual operator.
The name assigned to the Increment operator.
The name assigned to the Inequality operator.
The name assigned to the LeftShift operator.
The name assigned to the LessThan operator.
The name assigned to the LessThanOrEqual operator.
The name assigned to the LogicalNot operator.
The name assigned to the Modulus operator.
The name assigned to the Multiply operator.
The name assigned to the OnesComplement operator.
The name assigned to the RightShift operator.
The name assigned to the Subtraction operator.
The name assigned to the True operator.
The name assigned to the UnaryNegation operator.
The name assigned to the UnaryPlus operator.
The name assigned to the UnsignedRightShift operator.
An editor for making changes to multiple documents in a solution.
An editor for making changes to multiple documents in a solution.
The that was specified when the was constructed.
Gets the for the corresponding .
Returns the changed .
An editor for making changes to symbol source declarations.
Creates a new instance.
Creates a new instance.
The original solution.
The solution with the edits applied.
The documents changed since the was constructed.
Gets the current symbol for a source symbol.
Gets the current declarations for the specified symbol.
Gets the declaration syntax nodes for a given symbol.
Gets the best declaration node for adding members.
An action that makes changes to a declaration node within a .
The to apply edits to.
The declaration to edit.
An action that makes changes to a declaration node within a .
The to apply edits to.
The declaration to edit.
A cancellation token.
Enables editing the definition of one of the symbol's declarations.
Partial types and methods may have more than one declaration.
The symbol to edit.
The action that makes edits to the declaration.
An optional .
The new symbol including the changes.
Enables editing the definition of one of the symbol's declarations.
Partial types and methods may have more than one declaration.
The symbol to edit.
The action that makes edits to the declaration.
An optional .
The new symbol including the changes.
Enables editing the definition of one of the symbol's declarations.
Partial types and methods may have more than one declaration.
The symbol to edit.
A location within one of the symbol's declarations.
The action that makes edits to the declaration.
An optional .
The new symbol including the changes.
Enables editing the definition of one of the symbol's declarations.
Partial types and methods may have more than one declaration.
The symbol to edit.
A location within one of the symbol's declarations.
The action that makes edits to the declaration.
An optional .
The new symbol including the changes.
Enables editing the symbol's declaration where the member is also declared.
Partial types and methods may have more than one declaration.
The symbol to edit.
A symbol whose declaration is contained within one of the primary symbol's declarations.
The action that makes edits to the declaration.
An optional .
The new symbol including the changes.
Enables editing the symbol's declaration where the member is also declared.
Partial types and methods may have more than one declaration.
The symbol to edit.
A symbol whose declaration is contained within one of the primary symbol's declarations.
The action that makes edits to the declaration.
An optional .
The new symbol including the changes.
Enables editing all the symbol's declarations.
Partial types and methods may have more than one declaration.
The symbol to be edited.
The action that makes edits to the declaration.
An optional .
The new symbol including the changes.
Enables editing all the symbol's declarations.
Partial types and methods may have more than one declaration.
The symbol to be edited.
The action that makes edits to the declaration.
An optional .
The new symbol including the changes.
Gets the reference to the declaration of the base or interface type as part of the symbol's declaration.
Changes the base type of the symbol.
Changes the base type of the symbol.
An editor for making changes to a syntax tree. The editor works by applying a list of changes, in order, to a
particular tree. Changes are given a they will apply to in the
original tree the editor was created for. The semantics of application are as follows:
-
The original root provided is used as the 'current' root for all operations. This 'current' root will
continually be updated, becoming the new 'current' root. The original root is never changed.
-
Each change has its given tracked, using a , producing a
'current' root that tracks all of them. This allows that same node to be found after prior changes are applied
which mutate the tree.
-
Each change is then applied in the order it was added to the editor.
-
A change first attempts to find its in the 'current' root. If that node cannot be
found, the operation will fail with an .
-
The particular change will run on that node, removing, replacing, or inserting around it according to the
change. If the change is passed a delegate as its 'compute' argument, it will be given the found in the current root. The 'current' root will then be updated by replacing the current
node with the new computed node.
-
The 'current' root is then returned.
The above editing strategy makes it an error for a client of the editor to add a change that updates a parent
node and then add a change that updates a child node (unless the parent change is certain to contain the
child); attempting this will throw at runtime. If a client ever needs to update both a child and a parent,
it should add the child change first and then the parent change, and the parent change should pass an
appropriate 'compute' callback so that it sees the results of the child change.
If a client wants to make a replacement and then find the value placed into
the tree, it can do so by adding a dedicated annotation to that node and then looking it up in the
'current' node passed to a 'compute' callback.
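A hedged sketch of the child-then-parent ordering described above, using DocumentEditor (a SyntaxEditor subclass); 'document', 'childNode', 'newChildNode', and 'parentNode' are assumed to come from the same original root:

```csharp
// Add the child change first...
DocumentEditor editor = await DocumentEditor.CreateAsync(document);
editor.ReplaceNode(childNode, newChildNode);

// ...then the parent change, using a 'compute' callback so that 'current'
// reflects the child replacement already applied to the tracked tree.
editor.ReplaceNode(parentNode, (current, generator) =>
    generator.WithName(current, "RenamedParent"));

Document changed = editor.GetChangedDocument();
```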
Creates a new instance.
Creates a new instance.
Creates a new instance.
The that was specified when the was constructed.
A to use to create and change 's.
Returns the changed root node.
Makes sure the node is tracked, even if it is not changed.
Remove the node from the tree.
The node to remove that currently exists as part of the tree.
Remove the node from the tree.
The node to remove that currently exists as part of the tree.
Options that affect how node removal works.
Replace the specified node with a node produced by the function.
The node to replace that already exists in the tree.
A function that computes a replacement node.
The node passed into the compute function includes changes from prior edits. It will not appear as a descendant of the original root.
Replace the specified node with a different node.
The node to replace that already exists in the tree.
The new node that will be placed into the tree in the existing node's location.
Insert the new nodes before the specified node already existing in the tree.
The node already existing in the tree that the new nodes will be placed before. This must be a node that is contained within a syntax list.
The nodes to place before the existing node. These nodes must be of a compatible type to be placed in the same list containing the existing node.
Insert the new node before the specified node already existing in the tree.
The node already existing in the tree that the new node will be placed before. This must be a node that is contained within a syntax list.
The node to place before the existing node. This node must be of a compatible type to be placed in the same list containing the existing node.
Insert the new nodes after the specified node already existing in the tree.
The node already existing in the tree that the new nodes will be placed after. This must be a node that is contained within a syntax list.
The nodes to place after the existing node. These nodes must be of a compatible type to be placed in the same list containing the existing node.
Insert the new node after the specified node already existing in the tree.
The node already existing in the tree that the new node will be placed after. This must be a node that is contained within a syntax list.
The node to place after the existing node. This node must be of a compatible type to be placed in the same list containing the existing node.
A language agnostic factory for creating syntax nodes.
This API can be used to create language-specific syntax nodes that are semantically
similar across languages.
The trees generated by this API will try to respect user preferences when
possible. For example, generating
will be done in a way such that "this." or "Me." will be simplified according to user
preference if is used.
Gets the for the specified language.
Gets the for the specified language.
Gets the for the language corresponding to the document.
Gets the for the language corresponding to the project.
Returns the node if it is a declaration, the immediate enclosing declaration if one exists, or null.
Returns the enclosing declaration of the specified kind or null.
Creates a field declaration.
Creates a field declaration matching an existing field symbol.
Creates a field declaration matching an existing field symbol.
Creates a method declaration.
Creates a method declaration matching an existing method symbol.
Creates a method declaration.
Creates an operator or conversion declaration matching an existing method symbol.
Creates a parameter declaration.
Creates a parameter declaration matching an existing parameter symbol.
Creates a property declaration. The property will have a get accessor if
is and will have
a set accessor if is .
In C# there is a distinction between passing in for or versus
passing in an empty list. will produce an auto-property-accessor
(i.e. get;) whereas an empty list will produce an accessor with an empty block
(i.e. get { }).
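The null-versus-empty distinction can be sketched with the public SyntaxGenerator factory (parameter names follow the public API; 'generator' is assumed to be a C# SyntaxGenerator):

```csharp
SyntaxNode type = generator.TypeExpression(SpecialType.System_String);

// Null accessor statements -> auto accessors: string Name { get; set; }
SyntaxNode autoProp = generator.PropertyDeclaration(
    "Name", type, Accessibility.Public);

// Empty statement lists -> accessors with empty blocks:
// string Name { get { } set { } }
SyntaxNode blockProp = generator.PropertyDeclaration(
    "Name", type, Accessibility.Public,
    getAccessorStatements: Array.Empty<SyntaxNode>(),
    setAccessorStatements: Array.Empty<SyntaxNode>());
```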
Creates a property declaration using an existing property symbol as a signature.
Creates an indexer declaration.
Creates an indexer declaration matching an existing indexer symbol.
Creates a statement that adds the given handler to the given event.
Creates a statement that removes the given handler from the given event.
Creates an event declaration.
Creates an event declaration from an existing event symbol.
Creates a custom event declaration.
Creates a custom event declaration from an existing event symbol.
Creates a constructor declaration.
Creates a constructor declaration using
Converts method, property and indexer declarations into public interface implementations.
This is equivalent to an implicit C# interface implementation (you can access it via the interface or directly via the named member.)
Converts method, property and indexer declarations into public interface implementations.
This is equivalent to an implicit C# interface implementation (you can access it via the interface or directly via the named member.)
Converts method, property and indexer declarations into private interface implementations.
This is equivalent to a C# explicit interface implementation (you can declare it for access via the interface, but cannot call it directly).
Converts method, property and indexer declarations into private interface implementations.
This is equivalent to a C# explicit interface implementation (you can declare it for access via the interface, but cannot call it directly).
Creates a class declaration.
Creates a struct declaration.
Creates an interface declaration.
Creates an enum declaration.
Creates an enum declaration.
Creates an enum member.
Creates a delegate declaration.
Creates a declaration matching an existing symbol.
Converts a declaration (method, class, etc) into a declaration with type parameters.
Converts a declaration (method, class, etc) into a declaration with type parameters.
Adds a type constraint to a type parameter of a declaration.
Adds a type constraint to a type parameter of a declaration.
Adds a type constraint to a type parameter of a declaration.
Creates a namespace declaration.
The name of the namespace.
Zero or more namespace or type declarations.
Creates a namespace declaration.
The name of the namespace.
Zero or more namespace or type declarations.
Creates a namespace declaration.
The name of the namespace.
Zero or more namespace or type declarations.
Creates a namespace declaration.
The name of the namespace.
Zero or more namespace or type declarations.
Creates a compilation unit declaration.
Zero or more namespace import, namespace or type declarations.
Creates a compilation unit declaration.
Zero or more namespace import, namespace or type declarations.
Creates a namespace import declaration.
The name of the namespace being imported.
Creates a namespace import declaration.
The name of the namespace being imported.
Creates an alias import declaration.
The name of the alias.
The namespace or type to be aliased.
Creates an alias import declaration.
The name of the alias.
The namespace or type to be aliased.
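Composing the declaration factories above, a file skeleton might be built like this (a sketch; 'generator' is assumed to be a SyntaxGenerator):

```csharp
SyntaxNode cls = generator.ClassDeclaration("Customer");
SyntaxNode ns = generator.NamespaceDeclaration("Contoso.Billing", cls);
SyntaxNode import = generator.NamespaceImportDeclaration("System");

// Roughly: using System; namespace Contoso.Billing { class Customer { } }
SyntaxNode unit = generator.CompilationUnit(import, ns);
```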
Creates an attribute.
Creates an attribute.
Creates an attribute.
Creates an attribute matching existing attribute data.
Creates an attribute argument.
Creates an attribute argument.
Removes all attributes from the declaration, including return attributes.
Removes comments from leading and trailing trivia, as well
as potentially removing comments from opening and closing tokens.
Gets the attributes of a declaration, not including the return attributes.
Creates a new instance of the declaration with the attributes inserted.
Creates a new instance of the declaration with the attributes inserted.
Creates a new instance of a declaration with the specified attributes added.
Creates a new instance of a declaration with the specified attributes added.
Gets the return attributes from the declaration.
Creates a new instance of a method declaration with return attributes inserted.
Creates a new instance of a method declaration with return attributes inserted.
Creates a new instance of a method declaration with return attributes added.
Creates a new instance of a method declaration node with return attributes added.
Gets the attribute arguments for the attribute declaration.
Creates a new instance of the attribute with the arguments inserted.
Creates a new instance of the attribute with the arguments added.
Gets the namespace imports that are part of the declaration.
Creates a new instance of the declaration with the namespace imports inserted.
Creates a new instance of the declaration with the namespace imports inserted.
Creates a new instance of the declaration with the namespace imports added.
Creates a new instance of the declaration with the namespace imports added.
Gets the current members of the declaration.
Creates a new instance of the declaration with the members inserted.
Creates a new instance of the declaration with the members inserted.
Creates a new instance of the declaration with the members added to the end.
Creates a new instance of the declaration with the members added to the end.
Gets the accessibility of the declaration.
Changes the accessibility of the declaration.
Gets the for the declaration.
Changes the for the declaration.
Gets the for the declaration.
Gets the name of the declaration.
Changes the name of the declaration.
Gets the type of the declaration.
Changes the type of the declaration.
Gets the list of parameters for the declaration.
Inserts the parameters at the specified index into the declaration.
Adds the parameters to the declaration.
Gets the list of switch sections for the statement.
Inserts the switch sections at the specified index into the statement.
Adds the switch sections to the statement.
Gets the expression associated with the declaration.
Changes the expression associated with the declaration.
Gets the statements for the body of the declaration.
Changes the statements for the body of the declaration.
Gets the accessors for the declaration.
Gets the accessor of the specified kind for the declaration.
Creates a new instance of the declaration with the accessors inserted.
Creates a new instance of the declaration with the accessors added.
Gets the statements for the body of the get-accessor of the declaration.
Changes the statements for the body of the get-accessor of the declaration.
Gets the statements for the body of the set-accessor of the declaration.
Changes the statements for the body of the set-accessor of the declaration.
Gets a list of the base and interface types for the declaration.
Adds a base type to the declaration.
Adds an interface type to the declaration.
Replaces the node in the root's tree with the new node.
Inserts the new node before the specified declaration.
Inserts the new node before the specified declaration.
Removes the node from the subtree starting at the root.
Removes the node from the subtree starting at the root.
Removes all the declarations from the subtree starting at the root.
Creates a new instance of the node with the leading and trailing trivia removed and replaced with elastic markers.
Creates statement that allows an expression to execute in a statement context.
This is typically an invocation or assignment expression.
The expression that is to be executed. This is usually a method invocation expression.
Creates a statement that can be used to return a value from a method body.
An optional expression that can be returned.
Creates a statement that can be used to yield a value from an iterator method.
An expression that can be yielded.
Creates a statement that can be used to throw an exception.
An optional expression that can be thrown.
Creates an expression that can be used to throw an exception.
True if can be used
if the language requires a
(including ) to be stated when making a
.
if the language allows the type node to be entirely elided.
Creates a statement that declares a single local variable.
Creates a statement that declares a single local variable.
Creates a statement that declares a single local variable.
Creates a statement that declares a single local variable.
Creates an if-statement.
A condition expression.
The statements that are executed if the condition is true.
The statements that are executed if the condition is false.
Creates an if-statement.
A condition expression.
The statements that are executed if the condition is true.
A single statement that is executed if the condition is false.
Creates a switch statement that branches to individual sections based on the value of the specified expression.
Creates a switch statement that branches to individual sections based on the value of the specified expression.
Creates a section for a switch statement.
Creates a single-case section of a switch statement.
Creates a default section for a switch statement.
Creates a statement that exits a switch statement and continues after it.
Creates a statement that represents a using-block pattern.
Creates a statement that represents a using-block pattern.
Creates a statement that represents a using-block pattern.
Creates a statement that represents a lock-block pattern.
Creates a try-catch or try-catch-finally statement.
Creates a try-catch or try-catch-finally statement.
Creates a try-finally statement.
Creates a catch-clause.
Creates a catch-clause.
Creates a while-loop statement.
Creates a block of statements. Not supported in VB.
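Putting a few of the statement factories above together (a sketch; 'generator' is assumed to be a SyntaxGenerator):

```csharp
// Roughly: if (x < 0) return "neg"; else return "non-neg";
SyntaxNode condition = generator.LessThanExpression(
    generator.IdentifierName("x"),
    generator.LiteralExpression(0));

SyntaxNode ifStatement = generator.IfStatement(
    condition,
    trueStatements: new[] { generator.ReturnStatement(generator.LiteralExpression("neg")) },
    falseStatements: new[] { generator.ReturnStatement(generator.LiteralExpression("non-neg")) });
```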
An expression that represents the default value of a type.
This is typically a null value for reference types or a zero-filled value for value types.
Creates an expression that denotes the containing method's this-parameter.
Creates an expression that denotes the containing method's base-parameter.
Creates a literal expression. This is typically numeric primitives, strings or chars.
Creates a literal expression. This is typically numeric primitives, strings or chars.
Creates an expression for a typed constant.
Creates an expression that denotes the boolean false literal.
Creates an expression that denotes the boolean true literal.
Creates an expression that denotes the null literal.
Creates an expression that denotes a simple identifier name.
Creates an expression that denotes a generic identifier name.
Creates an expression that denotes a generic identifier name.
Creates an expression that denotes a generic identifier name.
Creates an expression that denotes a generic identifier name.
Converts an expression that ends in a name into an expression that ends in a generic name.
If the expression already ends in a generic name, the new type arguments are used instead.
Converts an expression that ends in a name into an expression that ends in a generic name.
If the expression already ends in a generic name, the new type arguments are used instead.
Creates a name expression that denotes a qualified name.
The left operand can be any name expression.
The right operand can be either an identifier or a generic name.
Returns a new name node qualified with the 'global' alias ('Global' in VB).
Creates a name expression from a dotted name string.
Creates a name that denotes a type or namespace.
The symbol to create a name for.
Creates an expression that denotes a type.
Creates an expression that denotes a type. If addImport is false,
adds a which will prevent any
imports or usings from being added for the type.
Creates an expression that denotes a special type name.
Creates an expression that denotes an array type.
Creates an expression that denotes a nullable type.
Creates an expression that denotes a tuple type.
Creates an expression that denotes a tuple type.
Creates an expression that denotes a tuple type.
Creates an expression that denotes a tuple element.
Creates an expression that denotes a tuple element.
Creates an expression that denotes an assignment from the right argument to left argument.
Creates an expression that denotes a value-type equality test operation.
Creates an expression that denotes a reference-type equality test operation.
Creates an expression that denotes a value-type inequality test operation.
Creates an expression that denotes a reference-type inequality test operation.
Creates an expression that denotes a less-than test operation.
Creates an expression that denotes a less-than-or-equal test operation.
Creates an expression that denotes a greater-than test operation.
Creates an expression that denotes a greater-than-or-equal test operation.
Creates an expression that denotes a unary negation operation.
Creates an expression that denotes an addition operation.
Creates an expression that denotes a subtraction operation.
Creates an expression that denotes a multiplication operation.
Creates an expression that denotes a division operation.
Creates an expression that denotes a modulo operation.
Creates an expression that denotes a bitwise-and operation.
Creates an expression that denotes a bitwise-or operation.
Creates an expression that denotes a bitwise-not operation.
Creates an expression that denotes a logical-and operation.
Creates an expression that denotes a logical-or operation.
Creates an expression that denotes a logical not operation.
Creates an expression that denotes a conditional evaluation operation.
Creates an expression that denotes a conditional access operation. Use and to generate the argument.
Creates an expression that denotes a member binding operation.
Creates an expression that denotes an element binding operation.
Creates an expression that denotes an element binding operation.
Creates an expression that denotes a coalesce operation.
Creates a member access expression.
Creates a member access expression.
Creates an array creation expression for a single dimensional array of specified size.
Creates an array creation expression for a single dimensional array with specified initial element values.
Creates an object creation expression.
Creates an object creation expression.
Creates an object creation expression.
Creates an object creation expression.
Creates an invocation expression.
Creates an invocation expression.
Creates a node that is an argument to an invocation.
Creates a node that is an argument to an invocation.
Creates a node that is an argument to an invocation.
Creates an expression that accesses an element of an array or indexer.
Creates an expression that accesses an element of an array or indexer.
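A sketch combining the member-access and invocation factories above ('generator' is assumed to be a SyntaxGenerator):

```csharp
// Roughly: Console.WriteLine("hello")
SyntaxNode callee = generator.MemberAccessExpression(
    generator.IdentifierName("Console"), "WriteLine");

SyntaxNode call = generator.InvocationExpression(
    callee, generator.LiteralExpression("hello"));
```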
Creates an expression that evaluates to the type at runtime.
Creates an expression that denotes an is-type-check operation.
Creates an expression that denotes an is-type-check operation.
Creates an expression that denotes a try-cast operation.
Creates an expression that denotes a try-cast operation.
Creates an expression that denotes a type cast operation.
Creates an expression that denotes a type cast operation.
Creates an expression that denotes a type conversion operation.
Creates an expression that denotes a type conversion operation.
Creates an expression that declares a value returning lambda expression.
Creates an expression that declares a void returning lambda expression.
Creates an expression that declares a value returning lambda expression.
Creates an expression that declares a void returning lambda expression.
Creates an expression that declares a single parameter value returning lambda expression.
Creates an expression that declares a single parameter void returning lambda expression.
Creates an expression that declares a single parameter value returning lambda expression.
Creates an expression that declares a single parameter void returning lambda expression.
Creates an expression that declares a zero parameter value returning lambda expression.
Creates an expression that declares a zero parameter void returning lambda expression.
Creates an expression that declares a zero parameter value returning lambda expression.
Creates an expression that declares a zero parameter void returning lambda expression.
Creates a lambda parameter.
Creates a lambda parameter.
Creates an await expression.
Wraps with parentheses.
Creates a nameof expression.
Creates a tuple expression.
Parses an expression from a string.
Has the reference type constraint (i.e. 'class' constraint in C#)
Has the value type constraint (i.e. 'struct' constraint in C#)
Has the constructor constraint (i.e. 'new' constraint in C#)
Options that we expect the user to set in editorconfig.
Looks at the contents of the document for top level identifiers (or existing extension method calls), and
blocks off imports that could potentially bring in a name that would conflict with them.
is the node that the import will be added to. This will either be the
compilation-unit node, or one of the namespace-blocks in the file.
Checks if the namespace declaration is contained inside,
or any of its ancestor namespaces are the same as
Internal extensions to .
This interface is available in the shared CodeStyle and Workspaces layer to allow
sharing internal generator methods between them. Once the methods are ready to be
made public APIs, they can be moved to .
Creates a statement that declares a single local variable with an optional initializer.
Creates a statement that declares a single local variable.
Wraps with parentheses.
Creates a statement that can be used to yield a value from an iterator method.
An expression that can be yielded.
if the language requires a "TypeExpression"
(including ) to be stated when making a
.
if the language allows the type node to be entirely elided.
Produces an appropriate TypeSyntax for the given . The
flag controls how this should be created depending on if this node is intended for use in a type-only
context, or an expression-level context. In the former case, both C# and VB will create QualifiedNameSyntax
nodes for dotted type names, whereas in the latter case both languages will create MemberAccessExpressionSyntax
nodes. The final stringified result will be the same in both cases. However, the structure of the trees
will be substantively different, which can impact how the compilation layers analyze the tree and how
transformational passes affect it.
Passing in the right value for is necessary for correctness and for use
of compilation (and other) layers in a supported fashion. For example, if a QualifiedNameSyntax is
used in a place where the compiler would have parsed a MemberAccessExpressionSyntax, then it is undefined
what will happen if that tree is passed to any other components.
Name of the host to be used in error messages (e.g. "Visual Studio").
Show global error info.
This kind of error info should describe something that affects Roslyn as a whole, such as
background compilation being disabled due to a memory issue.
Thrown when async code must cancel the current execution but does not have access to the of the passed to the code.
Should be used in very rare cases where the is out of our control (e.g. owned but not exposed by JSON RPC in certain call-back scenarios).
Set by the host to handle an error report; this may crash the process or report telemetry.
A handler that will not crash the process when called. Used when calling
Same as setting the Handler property except that it avoids the assert. This is useful in
test code which needs to verify the handler is called in specific cases and will continually
overwrite this value.
Copies the handler in this instance to the linked copy of this type in this other assembly.
This file is linked into multiple layers, but we want to ensure that all layers have the same copy.
This lets us copy the handler in this instance into the same in another instance.
Use in an exception filter to report an error without catching the exception.
The error is reported by calling .
to avoid catching the exception.
Use in an exception filter to report an error (by calling ), unless the
operation has been cancelled. The exception is never caught.
to avoid catching the exception.
Use in an exception filter to report an error (by calling ), unless the
operation has been cancelled at the request of . The exception is
never caught.
Cancellable operations are only expected to throw if the
applicable indicates cancellation is requested by setting
. Unexpected cancellation, i.e. an
which occurs without
requesting cancellation, is treated as an error by this method.
This method does not require to match
, provided cancellation is expected per the previous
paragraph.
A which will have
set if cancellation is expected.
to avoid catching the exception.
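The exception-filter usage described above can be sketched as follows. This is an illustrative sketch, not compilable on its own: `FatalError.ReportAndPropagateUnlessCanceled` is assumed to be the member documented here, and `ProcessAsync` is a hypothetical operation.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class Example
{
    static async Task RunAsync(Func<Task> processAsync, CancellationToken cancellationToken)
    {
        try
        {
            await processAsync();
        }
        catch (Exception e) when (FatalError.ReportAndPropagateUnlessCanceled(e, cancellationToken))
        {
            // Never reached: the filter runs at the throw site (so the report
            // captures the original stack) and returns false, "to avoid
            // catching the exception", letting it keep propagating.
            throw;
        }
    }
}
```

The key design point is that the filter executes before the stack unwinds, which is why the docs recommend reporting from a filter rather than from a catch block.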
Report an error.
Calls and doesn't pass the exception through (the method returns true).
This is generally expected to be used within an exception filter as that allows us to
capture data at the point the exception is thrown rather than when it is handled.
However, it can also be used outside of an exception filter. If the exception has not
already been thrown the method will throw and catch it itself to ensure we get a useful
stack trace.
True to catch the exception.
Use in an exception filter to report an error (by calling ) and catch
the exception, unless the operation was cancelled.
to catch the exception if the error was reported; otherwise,
to propagate the exception if the operation was cancelled.
Use in an exception filter to report an error (by calling ) and
catch the exception, unless the operation was cancelled at the request of
.
Cancellable operations are only expected to throw if the
applicable indicates cancellation is requested by setting
. Unexpected cancellation, i.e. an
which occurs without
requesting cancellation, is treated as an error by this method.
This method does not require to match
, provided cancellation is expected per the previous
paragraph.
A which will have
set if cancellation is expected.
to catch the exception if the error was reported; otherwise,
to propagate the exception if the operation was cancelled.
Used to report a non-fatal-watson (when possible) to report an exception. The exception is not caught. Does
nothing if no non-fatal error handler is registered. See the second argument to .
The severity of the error; see the enum members for a description of when to use each. This is metadata that's included
in a non-fatal fault report, which we can take advantage of on the backend to automatically triage bugs. For example,
a critical-severity issue can be opened as a bug at a lower hit count than a low-priority one.
The severity hasn't been categorized. Don't use this in new code.
Something failed, but the user is unlikely to notice. Especially useful for background things that we can silently recover
from, like bugs in caching systems.
Something failed, and the user might notice, but they're still likely able to carry on. For example, if the user
asked for some information from the IDE (find references, completion, etc.) and we were able to give partial results.
Something failed, and the user likely noticed. For example, the user pressed a button to do an action, and
we threw an exception so we completely failed to do that in an unrecoverable way. This may also be used
for back-end systems where a failure is going to result in a highly broken experience, for example if parsing a file
catastrophically failed.
Returns to make it easy to use in an exception filter. Note: will be called with any
exception, so this should not do anything in the case of .
All options needed to perform method extraction.
Provides helper methods for finding dependent projects across a solution that a given symbol can be referenced within.
Cache from the for a particular to the
name of the defined by it.
This method computes the dependent projects that need to be searched for references of the given .
This computation depends on the given symbol's visibility:
- Public: Dependent projects include the symbol definition project and all the referencing
projects.
- Internal: Dependent projects include the symbol definition project and all the referencing projects
that have internals access to the definition project.
- Private: Dependent projects include the symbol definition project and all the referencing submission
projects (which are special and can reference private fields of the previous submission).
We perform this computation in two stages:
- Compute all the dependent projects (submission + non-submission) and their InternalsVisibleTo semantics to the definition project.
- Filter the above computed dependent projects based on symbol visibility.
Dependent projects computed in stage (1) are cached to avoid recomputation.
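Stage (2) of the computation above can be sketched as a filter over the cached stage-(1) results. This is purely illustrative: the tuple shape and method name are assumptions, though `Accessibility` is the real Microsoft.CodeAnalysis enum.

```csharp
using System.Collections.Generic;
using Microsoft.CodeAnalysis;

static class DependentProjectsSketch
{
    // Hypothetical sketch of filtering dependent projects by symbol visibility,
    // following the Public/Internal/Private rules described above.
    public static IEnumerable<Project> FilterByVisibility(
        ISymbol symbol,
        Project definitionProject,
        IEnumerable<(Project Project, bool HasInternalsAccess, bool IsSubmission)> dependents)
    {
        // The definition project is always searched.
        yield return definitionProject;

        foreach (var d in dependents)
        {
            switch (symbol.DeclaredAccessibility)
            {
                case Accessibility.Public:                          // all referencing projects
                case Accessibility.Internal when d.HasInternalsAccess:
                case Accessibility.Private when d.IsSubmission:     // submissions can see prior
                    yield return d.Project;                         // submissions' private fields
                    break;
            }
        }
    }
}
```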
Returns information about where originate from. It's for both source and metadata symbols, and an optional if this
was a symbol from source.
Provides helper methods for finding dependent types (derivations, implementations, etc.) across a solution. This
is effectively a graph walk between INamedTypeSymbols walking down the inheritance hierarchy to find related
types based either on or .
While walking up the inheritance hierarchy is trivial (as the information is directly contained on the 's themselves), walking down is complicated. The general way this works is by using
out-of-band indices that are built that store this type information in a weak manner. Specifically, for both
source and metadata types we have indices that map between the base type name and the inherited type name. i.e.
for the case class A { } class B : A { } the index stores a link saying "There is a type 'A' somewhere
which has derived type called 'B' somewhere". So when the index is examined for the name 'A', it will say
'examine types called 'B' to see if they're an actual match'.
These links are then continually traversed to get the full set of results.
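The index-driven walk described above amounts to a fixed-point traversal over a name index. The sketch below is self-contained and illustrative only; Roslyn's actual indices are per-project and weak, and each candidate is checked semantically before being accepted.

```csharp
using System.Collections.Generic;

static class InheritanceWalk
{
    // 'index' maps a base type name to the names of types declared as inheriting
    // from it, e.g. index["A"] contains "B" for: class A { } class B : A { }.
    public static HashSet<string> FindAllDerived(
        IReadOnlyDictionary<string, List<string>> index, string rootName)
    {
        var result = new HashSet<string>();
        var worklist = new Queue<string>();
        worklist.Enqueue(rootName);

        while (worklist.Count > 0)
        {
            if (!index.TryGetValue(worklist.Dequeue(), out var candidates))
                continue;

            foreach (var candidate in candidates)
            {
                // The real service semantically examines each candidate here
                // ("examine types called 'B' to see if they're an actual match").
                if (result.Add(candidate))
                    worklist.Enqueue(candidate); // traverse links until fixed point
            }
        }

        return result;
    }
}
```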
Walks down a 's inheritance tree looking for more 's
that match the provided predicate.
Called when a new match is found to check if that type's inheritance
tree should also be walked down. Can be used to stop the search early if a type could have no types that
inherit from it that would match this search.
If this search after finding the direct inherited types that match the provided
predicate, or if the search should continue recursively using those types as the starting point.
Moves all the types in to . If these are types we
haven't seen before, and the caller says we on them, then add
them to for the next round of searching.
We cache the project instance per . This allows us to reuse it over a wide set of
changes (for example, changing completely unrelated projects that a particular project doesn't depend on).
However, doesn't change even when certain things change that will create a
substantively different . For example, if the for the project changes, we'll still have the same project state.
As such, we store the of the project as well, ensuring that if anything in it or its
dependencies changes, we recompute the index.
Finds all the documents in the provided project that contain the requested string values.
Finds all the documents in the provided project that contain a global attribute in them.
If the `node` implicitly matches the `symbol`, then it will be added to `locations`.
Find references to a symbol inside global suppressions.
For example, consider a field 'Field' defined inside a type 'C'.
This field's documentation comment ID is 'F:C.Field'
A reference to this field inside a global suppression would be as following:
[assembly: SuppressMessage("RuleCategory", "RuleId", Scope = "member", Target = "~F:C.Field")]
Validate and split a documentation comment ID into a prefix and complete symbol ID. For the
~M:C.X(System.String), the would be
~M: and would be C.X(System.String).
Split a full documentation symbol ID into the core symbol ID and optional parameter list. For the
C.X(System.String), the would be
C.X and would be (System.String).
Validate and split symbol documentation comment ID.
For example, "~M:C.X(System.String)" represents the documentation comment ID of a method named 'X'
that takes a single string-typed parameter and is contained in a type named 'C'.
We divide the ID into 3 groups:
1. Prefix:
- Starts with an optional '~'
- Followed by a single capital letter indicating the symbol kind (for example, 'M' indicates method symbol)
- Followed by ':'
2. Core symbol ID, which is its fully qualified name before the optional parameter list and return type (i.e. before the '(' or '[' tokens)
3. Optional parameter list and/or return type that begins with a '(' or '[' token.
For the above example, "~M:" is the prefix, "C.X" is the core symbol ID and "(System.String)" is the parameter list.
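The three-group split described above can be sketched with plain string operations. This is illustrative only; the real implementation also validates the symbol-kind letter and each name part, and the method name here is hypothetical.

```csharp
using System;

static class DocCommentId
{
    // Splits a documentation comment ID into prefix, core symbol ID, and the
    // optional parameter list / return type, per the three groups above.
    public static (string Prefix, string CoreId, string Parameters) Split(string id)
    {
        var start = id.StartsWith("~") ? 1 : 0;     // optional '~'

        // Prefix: one capital kind letter (e.g. 'M' for method) followed by ':'.
        if (id.Length < start + 2 || !char.IsUpper(id[start]) || id[start + 1] != ':')
            throw new FormatException($"Not a documentation comment ID: '{id}'");
        var prefix = id.Substring(0, start + 2);

        // Parameter list / return type begins at the first '(' or '[' token.
        var rest = id.Substring(start + 2);
        var paramStart = rest.IndexOfAny(new[] { '(', '[' });
        return paramStart < 0
            ? (prefix, rest, "")
            : (prefix, rest.Substring(0, paramStart), rest.Substring(paramStart));
    }
}

// DocCommentId.Split("~M:C.X(System.String)") → ("~M:", "C.X", "(System.String)")
```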
Finds references to in this , but only if it is referenced
through (which might be the actual name of the type, or a global alias to it).
The actual node that we found the reference on. Normally the 'Name' portion
of any piece of syntax. Might also be something like a 'foreach' statement node
when finding results for something like GetEnumerator.
The location we want to return through the FindRefs API. The location contains
additional information (like if this was a Write, or if it was Implicit). This value
also has a property. Importantly, this value
is not necessarily the same location you would get by calling . Instead, this location is where we want to navigate
the user to. A case where this can be different is with an indexer reference. The will be the node for the full 'ElementAccessExpression', whereas the
location we will take the user to will be the zero-length position immediately preceding
the `[` character.
Extensibility interface to allow individual languages to extend the 'Find References' service.
Languages can use this to provide specialized cascading logic between symbols that 'Find
References' is searching for.
Extensibility interface to allow extending the IFindReferencesService service. Implementations
must be thread-safe as the methods on this interface may be called on multiple threads
simultaneously. Implementations should also respect the provided cancellation token and
should try to cancel themselves quickly when requested.
Determines what, if any, global alias names could potentially map to this symbol in this project.
Note that this result is allowed to return global aliases that don't actually map to this symbol.
For example, given symbol A.X and global alias G = B.X, G might be returned
in a search for A.X because they both end in X.
Called by the find references search engine when a new symbol definition is found.
Implementations can then choose to request more symbols be searched for. For example, an
implementation could choose for the find references search engine to cascade to
constructors when searching for standard types.
Implementations of this method must be thread-safe.
Called by the find references search engine to determine which documents in the supplied
project need to be searched for references. Only projects returned by
DetermineProjectsToSearch will be passed to this method.
Implementations should endeavor to keep the list of returned documents as small as
possible to keep search time down to a minimum. Returning the entire list of documents
in a project is not recommended (unless, of course, there is good reason to
believe there are references in every document).
Implementations of this method must be thread-safe.
Called by the find references search engine to determine the set of reference locations
in the provided document. Only documents returned by DetermineDocumentsToSearch will be
passed to this method.
Implementations of this method must be thread-safe.
Looks for documents likely containing in them. That name will either be the actual
name of the named type we're looking for, or it might be a global alias to it.
Finds references to in this , but
only if it is referenced through (which might be the actual name
of the type, or a global alias to it).
Finds references to in this , but only if it is referenced
through (which might be the actual name of the type, or a global alias to it).
The list of common reference finders.
Caches information find-references needs associated with each document. Computed and cached so that multiple calls
to find-references in a row can share the same data.
Not used by FAR directly. But we compute and cache this while processing a document so that if we call any
other services that use this semantic model, that they don't end up recreating it.
Ephemeral information that find-references needs for a particular document when searching for a specific
symbol. Importantly, it contains the global aliases to that symbol within the current project.
Ephemeral information that find-references needs for a particular document when searching for a specific
symbol. Importantly, it contains the global aliases to that symbol within the current project.
A does-nothing version of the . Useful for
clients that have no need to report progress as they work.
Symbol set used when is . This symbol set will cascade up *and* down the inheritance hierarchy for all symbols we
are searching for. This is the symbol set used for features like 'Rename', where all cascaded symbols must
be updated in order to keep the code compiling.
When we're cascading in both directions, we can just keep all symbols in a single set. We'll always be
examining all of them in both the up and down directions in every project we process. Any time we
add a new symbol to the set we'll continue to cascade in both directions looking for more.
Scheduler we use when we're doing operations in the BG and we want to rate limit them to not saturate the threadpool.
Options to control the parallelism of the search. If we're in mode, we'll run all our tasks concurrently. Otherwise, we will
run them serially using
Notify the caller of the engine about the definitions we've found that we're looking for. We'll only notify
them once per symbol group, but we may have to notify about new symbols each time we expand our symbol set
when we walk into a new project.
A symbol set used when the find refs caller does not want cascading. This is a trivial impl that basically
just wraps the initial symbol provided and doesn't need to do anything beyond that.
A symbol set used when the find refs caller does not want cascading. This is a trivial impl that basically
just wraps the initial symbol provided and doesn't need to do anything beyond that.
Represents the set of symbols that the engine is searching for. While the find-refs engine is passed an
initial symbol to find results for, the engine will often have to 'cascade' that symbol to many more symbols
that clients will also need. This includes:
- Cascading to all linked symbols for the requested symbol. This ensures a unified set of results for a
particular symbol, regardless of what project context it was originally found in.
- Symbol specific cascading. For example, when searching for a named type, references to that named
type will be found through its constructors.
- Cascading up and down the inheritance hierarchy for members (e.g. methods, properties, events). This
is controllable through the
option.
Get a copy of all the symbols in the set. Cannot be called concurrently with
Update the set of symbols in this set with any appropriate symbols in the inheritance hierarchy brought
in within . For example, given a project 'A' with interface interface IGoo
{ void Goo(); }, and a project 'B' with class class Goo : IGoo { public void Goo() { } },
then initially the symbol set will only contain IGoo.Goo. However, when project 'B' is processed,
Goo.Goo is added to the set as well so that references to it can be found.
This method is not thread-safe as it mutates the symbol set instance. As such, it should only be called
serially. should not be called concurrently with this.
Determines the initial set of symbols that we should actually be finding references for given a request
to find refs to . This will include any symbols that a specific cascades to, as well as all the linked symbols to those across any
multi-targeting/shared-project documents. This will not include symbols up or down the inheritance
hierarchy.
Finds all the symbols 'down' the inheritance hierarchy of in the given
project. The symbols found are added to . If did not
contain that symbol, then it is also added to to allow fixed point
algorithms to continue.
will always be a single project. We just pass this in as a set to
avoid allocating a fresh set every time this calls into FindMemberImplementationsArrayAsync.
Finds all the symbols 'up' the inheritance hierarchy of in the solution. The
symbols found are added to . If did not contain that symbol,
then it is also added to to allow fixed point algorithms to continue.
Symbol set used when is . This symbol set will only cascade in a uniform direction once it walks either up or down
from the initial set of symbols. This is the symbol set used for features like 'Find Refs', where we only
want to return location results for members that could feasibly end up calling into that member at
runtime. See the docs of for more
information on this.
Symbol set used when is . This symbol set will only cascade in a uniform direction once it walks either up or down
from the initial set of symbols. This is the symbol set used for features like 'Find Refs', where we only
want to return location results for members that could feasibly end up calling into that member at
runtime. See the docs of for more
information on this.
When we're doing a unidirectional find-references, the initial set of up-symbols can never change.
That's because we have computed the up set entirely up front, and no down symbols can produce new
up-symbols (as going down then up would not be unidirectional).
When searching for property, associate specific references we find to the relevant
accessor symbol (if there is one). For example, in C#, this would result in:
P = 0; // A reference to the P.set accessor
var v = P; // A reference to the P.get accessor
P++; // A reference to P.get and P.set accessors
nameof(P); // A reference only to P. Not associated with a particular accessor.
The default for this is false. With that default, all of the above references
are associated with the property P and not the accessors.
Whether or not we should cascade from the original search symbol to new symbols as we're
doing the find-references search.
Whether or not this find-references operation was explicitly invoked. If explicitly invoked, the find
references operation may use more resources to get the results faster.
Features that run automatically should consider setting this to to avoid
unnecessarily impacting the user while they are doing other work.
When cascading if we should only travel in a consistent direction away from the starting symbol. For
example, starting on a virtual method, this would cascade upwards to implemented interface methods, and
downwards to overridden methods. However, it would not then travel back down to other implementations of
those interface methods. This is useful for cases where the client only wants references that could lead to
this symbol actually being called into at runtime.
There are cases where a client will not want this behavior. An example of that is 'Rename'. In rename,
there is an implicit link between members in a hierarchy with the same name (and appropriate signature). For example, in:
interface I { void Goo(); }
class C1 : I { public void Goo() { } }
class C2 : I { public void Goo() { } }
If C1.Goo is renamed, this will need to rename C2.Goo as well to keep the code properly
compiling. So, by default, 'Rename' will cascade to all of these so it can appropriately update them. This
option is more relevant for knowing whether a particular reference would actually result in a call to the
original member, not whether it has a relation to the original member.
Displays all definitions regardless of whether they have a reference or not.
When searching for property, associate specific references we find to the relevant
accessor symbol (if there is one). For example, in C#, this would result in:
P = 0; // A reference to the P.set accessor
var v = P; // A reference to the P.get accessor
P++; // A reference to P.get and P.set accessors
nameof(P); // A reference only to P. Not associated with a particular accessor.
The default for this is false. With that default, all of the above references
are associated with the property P and not the accessors.
Whether or not we should cascade from the original search symbol to new symbols as we're
doing the find-references search.
Whether or not this find-references operation was explicitly invoked. If explicitly invoked, the find
references operation may use more resources to get the results faster.
Features that run automatically should consider setting this to to avoid
unnecessarily impacting the user while they are doing other work.
When cascading if we should only travel in a consistent direction away from the starting symbol. For
example, starting on a virtual method, this would cascade upwards to implemented interface methods, and
downwards to overridden methods. However, it would not then travel back down to other implementations of
those interface methods. This is useful for cases where the client only wants references that could lead to
this symbol actually being called into at runtime.
There are cases where a client will not want this behavior. An example of that is 'Rename'. In rename,
there is an implicit link between members in a hierarchy with the same name (and appropriate signature). For example, in:
interface I { void Goo(); }
class C1 : I { public void Goo() { } }
class C2 : I { public void Goo() { } }
If C1.Goo is renamed, this will need to rename C2.Goo as well to keep the code properly
compiling. So, by default, 'Rename' will cascade to all of these so it can appropriately update them. This
option is more relevant for knowing whether a particular reference would actually result in a call to the
original member, not whether it has a relation to the original member.
Displays all definitions regardless of whether they have a reference or not.
Returns the appropriate options for a given symbol for the specific 'Find References' feature. This should
not be used for other features (like 'Rename'). For the 'Find References' feature, if the user starts
searching on an accessor, then we want to give results associated with the specific accessor. Otherwise, if
they search on a property, then associate everything with the property. We also only want to travel an
inheritance hierarchy unidirectionally so that we only see potential references that could actually reach
this particular member.
A does-nothing version of the . Useful for
clients that have no need to report progress as they work.
Wraps an into an
so it can be used from the new streaming find references APIs.
Reports the progress of the FindReferences operation. Note: these methods may be called on
any thread.
Represents a group of s that should be treated as a single entity for
the purposes of presentation in a Find UI. For example, when a symbol is defined in a file
that is linked into multiple project contexts, there will be several unique symbols created
that we search for. Placing these in a group allows the final consumer to know that these
symbols can be merged together.
All the symbols in the group.
Reports the progress of the FindReferences operation. Note: these methods may be called on
any thread.
Represents a single result of the call to the synchronous
IFindReferencesService.FindReferences method. Finding the references to a symbol will result
in a set of definitions being returned (containing at least the symbol requested) as well as
any references to those definitions in the source. Multiple definitions may be found due to
how C# and VB allow a symbol to be both a definition and a reference at the same time (for
example, a method which implements an interface method).
The symbol definition that these are references to.
Same as but exposed as an for performance.
The set of reference locations in the solution.
Information about a reference to a symbol.
The document that the reference was found in.
If the symbol was bound through an alias, then this is the alias that was used.
The actual source location for a given symbol.
Indicates if this is an implicit reference to the definition. i.e. the definition wasn't
explicitly stated in the source code at this position, but it was still referenced. For
example, this can happen with special methods like GetEnumerator that are used
implicitly by a 'for each' statement.
Indicates if this is a location where the reference is written to.
Symbol usage info for this reference.
Additional properties for this reference
If this reference location is within a string literal, then this property
indicates the location of the containing string literal token.
Otherwise, .
Creates a reference location with the given properties.
Creates a reference location within a string literal.
For example, location inside the target string of a global SuppressMessageAttribute.
Indicates if this was not an exact reference to a location, but was instead a possible
location that was found through error tolerance. For example, a call to a method like
"Goo()" could show up as an error tolerance location to a method "Goo(int i)" if no
actual "Goo()" method existed.
Use a case-sensitive comparison when searching for matching items.
Use a case-insensitive comparison when searching for matching items.
Use a fuzzy comparison when searching for matching items. Fuzzy matching allows for
a certain amount of misspellings, missing words, etc. See for
more details.
Search term is matched in a custom manner (i.e. with a user-provided predicate).
The name being searched for. Is null in the case of custom predicate searching, but
can be used for faster index-based searching when it is available.
The kind of search this is. Faster index-based searching can be used if the
SearchKind is not .
The predicate to fall back on if faster index searching is not possible.
Not readonly as this is a mutable struct.
Increment this whenever the data format of the changes. This ensures
that we will not try to read previously cached data from a prior version of Roslyn with a different format and
will instead regenerate all the indices with the new format.
Cache of ParseOptions to a checksum for the contained
within. Useful so we don't have to continually reenumerate and regenerate the checksum given how rarely
these ever change.
Collects all the definitions and
references that are reported independently and packages them up into the final list
of . This is used by the old non-streaming Find-References
APIs to return all the results at the end of the operation, as opposed to broadcasting
the results as they are found.
Collects all the definitions and
references that are reported independently and packages them up into the final list
of . This is used by the old non-streaming Find-References
APIs to return all the results at the end of the operation, as opposed to broadcasting
the results as they are found.
Contains information about a call from one symbol to another. The symbol making the call is
stored in CallingSymbol and the symbol that the call was made to is stored in CalledSymbol.
Whether or not the call is direct or indirect is also stored. A direct call is a call that
does not go through any other symbols in the inheritance hierarchy of CalledSymbol, while an
indirect call does go through the inheritance hierarchy. For example, calls through a base
member that this symbol overrides, or through an interface member that this symbol
implements will be considered 'indirect'.
The symbol that is calling the symbol being called.
The locations inside the calling symbol where the called symbol is referenced.
The symbol being called.
True if the CallingSymbol is directly calling CalledSymbol. False if it is calling a
symbol in the inheritance hierarchy of the CalledSymbol. For example, if the called
symbol is a class method, then an indirect call might be through an interface method that
the class method implements.
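The direct/indirect distinction described above surfaces through the public `SymbolFinder.FindCallersAsync` API, whose results carry exactly these members. A hedged usage sketch (the wrapper method is illustrative; it requires the Microsoft.CodeAnalysis.Workspaces package to compile):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.FindSymbols;

static class CallerSketch
{
    // Prints only direct callers, skipping calls made through base or
    // interface members in the called symbol's inheritance hierarchy.
    public static async Task PrintDirectCallersAsync(
        ISymbol symbol, Solution solution, CancellationToken cancellationToken)
    {
        foreach (var caller in await SymbolFinder.FindCallersAsync(symbol, solution, cancellationToken))
        {
            if (caller.IsDirect)
                Console.WriteLine(caller.CallingSymbol.ToDisplayString());
        }
    }
}
```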
Obsolete. Use .
Finds the symbol that is associated with a position in the text of a document.
The semantic model associated with the document.
The character position within the document.
A workspace to provide context.
A CancellationToken.
Finds the symbol that is associated with a position in the text of a document.
The semantic model associated with the document.
The character position within the document.
A CancellationToken.
Finds the symbol that is associated with a position in the text of a document.
The semantic model associated with the document.
The character position within the document.
True to include the type of the symbol in the search.
A CancellationToken.
Finds the definition symbol declared in source code for a corresponding reference symbol.
Returns null if no such symbol can be found in the specified solution.
Finds symbols in the given compilation that are similar to the specified symbol.
A found symbol may be the exact same symbol instance if the compilation is the origin of the specified symbol,
or it may be a different symbol instance if the compilation is not the originating compilation.
Multiple symbols may be returned if there are ambiguous matches.
No symbols may be returned if the compilation does not define or have access to a similar symbol.
The symbol to find corresponding matches for.
A compilation to find the corresponding symbol within. The compilation may or may not be the origin of the symbol.
A CancellationToken.
If is declared in a linked file, then this function returns all the symbols that
are defined by the same symbol's syntax in all the projects that the linked file is referenced from.
In order to be returned, the other symbols must have the same and as . This matches the general user intuition that these are all
the 'same' symbol, and should be examined, regardless of the project context they
originally started with.
Callback object we pass to the OOP server to hear about the result
of the FindReferencesEngine as it executes there.
Callback object we pass to the OOP server to hear about the result
of the FindReferencesEngine as it executes there.
Finds all the callers of a specified symbol.
Finds all the callers of a specified symbol.
Find the declared symbols from either source, referenced projects or metadata assemblies with the specified name.
Find the declared symbols from either source, referenced projects or metadata assemblies with the specified name.
Find the symbols for declarations made in source with a matching name.
Find the symbols for declarations made in source with a matching name.
Find the symbols for declarations made in source with a matching name.
Find the symbols for declarations made in source with a matching name.
Find the symbols for declarations made in source with the specified name.
Find the symbols for declarations made in source with the specified name.
Find the symbols for declarations made in source with the specified name.
Find the symbols for declarations made in source with the specified name.
Find the symbols for declarations made in source with the specified pattern. This pattern is matched
using heuristics that may change from release to release. So, the set of symbols matched by a given
pattern may change between releases. For example, new symbols may be matched by a pattern and/or
symbols previously matched by a pattern no longer are. However, the set of symbols matched by a
specific release will be consistent for a specific pattern.
Find the symbols for declarations made in source with the specified pattern. This pattern is matched
using heuristics that may change from release to release. So, the set of symbols matched by a given
pattern may change between releases. For example, new symbols may be matched by a pattern and/or
symbols previously matched by a pattern no longer are. However, the set of symbols matched by a
specific release will be consistent for a specific pattern.
Find the symbols for declarations made in source with the specified pattern. This pattern is matched
using heuristics that may change from release to release. So, the set of symbols matched by a given
pattern may change between releases. For example, new symbols may be matched by a pattern and/or
symbols previously matched by a pattern no longer are. However, the set of symbols matched by a
specific release will be consistent for a specific pattern.
Find the symbols for declarations made in source with the specified pattern. This pattern is matched
using heuristics that may change from release to release. So, the set of symbols matched by a given
pattern may change between releases. For example, new symbols may be matched by a pattern and/or
symbols previously matched by a pattern no longer are. However, the set of symbols matched by a
specific release will be consistent for a specific pattern.
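The pattern-based lookup described above is exposed through the static SymbolFinder class. A minimal sketch of how a caller might use it (the solution variable is assumed to come from an already-loaded workspace, e.g. MSBuildWorkspace):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.FindSymbols;

static class PatternSearchSample
{
    // Sketch: find source declarations matching a camel-hump pattern.
    // A pattern like "KVP" can match "KeyValuePair"; the exact matching
    // heuristics may change between releases, as noted above.
    public static async Task PrintMatchesAsync(Solution solution, CancellationToken ct)
    {
        var symbols = await SymbolFinder.FindSourceDeclarationsWithPatternAsync(
            solution, "KVP", ct).ConfigureAwait(false);

        foreach (var symbol in symbols)
            Console.WriteLine(symbol.ToDisplayString());
    }
}
```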
Finds all references to a symbol throughout a solution
The symbol to find references to.
The solution to find references within.
A cancellation token.
Finds all references to a symbol throughout a solution
The symbol to find references to.
The solution to find references within.
A set of documents to be searched. If documents is null, then that means "all documents".
A cancellation token.
Finds all references to a symbol throughout a solution
The symbol to find references to.
The solution to find references within.
An optional progress object that will receive progress
information as the search is undertaken.
An optional set of documents to be searched. If documents is null, then that means "all documents".
An optional cancellation token.
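The overloads above can be sketched as follows; this assumes a symbol already obtained from a compilation or semantic model:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.FindSymbols;

static class FindReferencesSample
{
    // Sketch: enumerate every reference location for a symbol across a solution.
    public static async Task DumpReferencesAsync(
        ISymbol symbol, Solution solution, CancellationToken ct)
    {
        var referencedSymbols = await SymbolFinder.FindReferencesAsync(
            symbol, solution, ct).ConfigureAwait(false);

        foreach (var referenced in referencedSymbols)
            foreach (var location in referenced.Locations)
                Console.WriteLine(
                    $"{location.Document.FilePath}: {location.Location.SourceSpan}");
    }
}
```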
Verifies that all pairs of named types in equivalentTypesWithDifferingAssemblies are equivalent forwarded types.
Returns if was forwarded to in
's .
Find symbols for members that override the specified member symbol.
Use this overload to avoid boxing the result into an .
Find symbols for declarations that implement members of the specified interface symbol
Use this overload to avoid boxing the result into an .
Finds all the derived classes of the given type. Implementations of an interface are not considered
"derived", but can be found with .
The symbol to find derived types of.
The solution to search in.
The projects to search. Can be null to search the entire solution.
The derived types of the symbol. The symbol passed in is not included in this list.
Finds the derived classes of the given type. Implementations of an interface are not considered
"derived", but can be found with .
The symbol to find derived types of.
The solution to search in.
If the search should stop at immediately derived classes, or should continue past that.
The projects to search. Can be null to search the entire solution.
The derived types of the symbol. The symbol passed in is not included in this list.
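A hedged usage sketch of the derived-classes search; passing null for the projects set searches the entire solution, and the input type itself is not part of the result:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.FindSymbols;

static class DerivedClassesSample
{
    // Sketch: find all classes transitively derived from a given named type.
    public static async Task DumpDerivedClassesAsync(
        INamedTypeSymbol type, Solution solution, CancellationToken ct)
    {
        var derived = await SymbolFinder.FindDerivedClassesAsync(
            type, solution, projects: null, cancellationToken: ct)
            .ConfigureAwait(false);

        foreach (var d in derived)
            Console.WriteLine(d.ToDisplayString());
    }
}
```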
Use this overload to avoid boxing the result into an .
Finds the derived interfaces of the given interfaces.
The symbol to find derived types of.
The solution to search in.
If the search should stop at immediately derived interfaces, or should continue past that.
The projects to search. Can be null to search the entire solution.
The derived interfaces of the symbol. The symbol passed in is not included in this list.
Use this overload to avoid boxing the result into an .
Finds the accessible or types that implement the given
interface.
The symbol to find derived types of.
The solution to search in.
If the search should stop at immediately derived interfaces, or should continue past that.
The projects to search. Can be null to search the entire solution.
Use this overload to avoid boxing the result into an .
Finds all the accessible symbols that implement an interface or interface member. For an this will be both immediate and transitive implementations.
Use this overload to avoid boxing the result into an .
Computes and caches indices for the source symbols in s and
for metadata symbols in s.
Can't be null. Even if we weren't able to read in metadata, we'll still create an empty
index.
The set of projects that are referencing this metadata-index. When this becomes empty we can dump the
index from memory.
Accesses to this collection must lock the set.
Same value as SolutionCrawlerTimeSpan.EntireProjectWorkerBackOff
Scheduler to run our tasks on. If we're in the remote host , we'll run all our tasks concurrently.
Otherwise, we will run them serially using
Gets the latest computed for the requested .
This may return an index corresponding to a prior version of the reference if it has since changed.
Another system is responsible for bringing these indices up to date in the background.
Represents a tree of names of the namespaces, types (and members within those types) within a or . This tree can be used to quickly determine if
there is a name match, and can provide the named path to that named entity. This path can then be used to
produce a corresponding that can be used by a feature. The primary purpose of this index
is to allow features to quickly determine that there is no name match, so that acquiring symbols is not
necessary. The secondary purpose is to generate a minimal set of symbols when there is a match, though that
will still incur a heavy cost (for example, getting the root symbol for a
particular project).
The list of nodes that represent symbols. The primary key into the sorting of this list is the name. They
are sorted case-insensitively. Case-sensitive matches can be found by binary searching for
something that matches case-insensitively, and then searching around that equivalence class for one that matches case-sensitively.
Inheritance information for the types in this assembly. The mapping is between
a type's simple name (like 'IDictionary') and the simple metadata names of types
that implement it or derive from it (like 'Dictionary').
Note: to save space, all names in this map are stored with simple ints. These
ints are the indices into _nodes that contain the nodes with the appropriate name.
This mapping is only produced for metadata assemblies.
Maps the receiver type name to its .
for the definition of simple/complex methods.
For non-array simple types, the receiver type name would be its metadata name, e.g. "Int32".
For any array types with simple type as element, the receiver type name would be just "ElementTypeName[]", e.g. "Int32[]" for int[][,]
For non-array complex types, the receiver type name is "".
For any array types with complex type as element, the receiver type name is "[]"
Finds symbols in this assembly that match the provided name in a fuzzy manner.
Returns if this index contains some symbol whose name matches case-sensitively; otherwise.
Get all symbols that have a name matching the specified name.
Searches for a name in the ordered list that matches per the .
Used to produce the simple-full-name components of a type from metadata.
The name is 'simple' in that it does not contain things like backticks,
generic arguments, or nested type + separators. Instead, just the name
of the type, any containing types, and the component parts of its namespace
are added. For example, for the type "X.Y.O`1.I`2", we will produce [X, Y, O, I]
s are produced when initially creating our indices.
They store Names of symbols and the index of their parent symbol. When we
produce the final though we will then convert
these to s.
s are produced when initially creating our indices.
They store Names of symbols and the index of their parent symbol. When we
produce the final though we will then convert
these to s.
The Name of this Node.
Index in of the parent Node of this Node.
Value will be if this is the
Node corresponding to the root symbol.
This is the type name of the parameter when is false.
For array types, this is just the element type name.
e.g. `int` for `int[][,]`
Indicates whether the type of the parameter is any kind of array.
This is relevant for both simple and complex types. For example:
- array of simple type like int[], int[][], int[][,], etc. are all ultimately represented as "int[]" in index.
- array of complex type like T[], T[][], etc are all represented as "[]" in index,
in contrast to just "" for non-array types.
Similar to , we divide extension methods into
simple and complex categories for filtering purposes. Whether a method is simple is determined based on
whether we can determine its receiver type easily with pure text matching. For complex methods, we
need to rely on symbols to decide whether it's feasible.
Simple types include:
- Primitive types
- Types that are not generic method parameters
- By reference type of any types above
- Array types with element of any types above
Name of the extension method.
This can be used to retrieve corresponding symbols via
Fully qualified name for the type that contains this extension method.
Cache the symbol tree infos for assembly symbols produced from a particular . Generating symbol trees for metadata can be expensive (in large
metadata cases). And it's common for many threads to want to search the same metadata
simultaneously. As such, we use an AsyncLazy to compute the value that can be shared among all callers.
We store this keyed off of the produced by . This
ensures that
Similar to except that this caches based on metadata id. The primary
difference here is that you can have the same MetadataId from two different s, while having different checksums. For example, if the aliases of a
are changed (see ), then it will have a different
checksum, but same metadata ID. As such, we can use this table to ensure we only do the expensive
computation of the once per , but we may then have to
make a copy of it with a new if the checksums differ.
Produces a for a given .
Note: will never return null;
Optional checksum for the (produced by ). Can be provided if already computed. If not provided it will be computed
and used for the .
Produces a for a given .
Note: will never return null;
Optional checksum for the (produced by ). Can be provided if already computed. If not provided it will be computed
and used for the .
Loads any info we have for this reference from our persistence store. Will succeed regardless of the
checksum of the . Should only be used by clients that are ok with potentially
stale data.
Represent this as non-null because that will be true when this is not in a pool and it is being used by
other services.
Only applies to member kind. Represents the type info of the first parameter.
Generalized function for loading/creating/persisting data. Used as the common core code for serialization
of source and metadata SymbolTreeInfos.
Loads any info we have for this project from our persistence store. Will succeed regardless of the
checksum of the . Should only be used by clients that are ok with potentially
stale data.
Cache of project to the checksum for it so that we don't have to expensively recompute
this each time we get a project.
Returns true when the identifier is probably (but not guaranteed to be) within the
syntax tree. Returns false when the identifier is guaranteed to not be within the
syntax tree.
Returns true when the identifier is probably (but not guaranteed) escaped within the
text of the syntax tree. Returns false when the identifier is guaranteed to not be
escaped within the text of the syntax tree. An identifier that is not escaped within
the text can be found by searching the text directly. An identifier that is escaped can
only be found by parsing the text and syntactically interpreting any escaping
mechanisms found in the language ("\uXXXX" or "@XXXX" in C# or "[XXXX]" in Visual
Basic).
Returns true when the identifier is probably (but not guaranteed to be) within the
syntax tree. Returns false when the identifier is guaranteed to not be within the
syntax tree.
String interning table so that we can share many more strings in our DeclaredSymbolInfo
buckets. Keyed off a Project instance so that we share all these strings as we create
the or load the index items for a specific Project. This helps as we will generally
be creating or loading all the index items for the documents in a Project at the same time.
Once this project is let go of (which happens with any solution change) then we'll dump
this string table. The table will have already served its purpose at that point and
doesn't need to be kept around further.
Gets the set of global aliases that point to something with the provided name and arity.
For example, if there is a global alias X = A.B.C<int>, then looking up with
name="C" and arity=1 will return X.
The name to pattern match against, and to show in a final presentation layer.
An optional suffix to be shown in a presentation layer appended to .
Container of the symbol that can be shown in a final presentation layer.
For example, the container of a type "KeyValuePair" might be
"System.Collections.Generic.Dictionary<TKey, TValue>". This can
then be shown with something like "type System.Collections.Generic.Dictionary<TKey, TValue>"
to indicate where the symbol is located.
Dotted container name of the symbol, used for pattern matching. For example
The fully qualified container of a type "KeyValuePair" would be
"System.Collections.Generic.Dictionary" (note the lack of type parameters).
This way someone can search for "D.KVP" and have the "D" part of the pattern
match against this. This should not be shown in a presentation layer.
The names directly referenced in source that this type inherits from.
Same as , just stored as a set for easy containment checks.
Maps the name of the extension method's receiver type to the index of its DeclaredSymbolInfo in `_declarationInfo`.
For simple types, the receiver type name is its metadata name. All predefined types are converted to their metadata form.
e.g. int => Int32. For generic types, type parameters are ignored.
For complex types, the receiver type name is "".
For any kind of array types, it's "{element's receiver type name}[]".
e.g.
int[][,] => "Int32[]"
T (where T is a type parameter) => ""
T[,] (where T is a type parameter) => "[]"
Helper comparer to enable consumers of to process references found in linked files only a single time.
Base implementation of C# and VB formatting services.
Formats whitespace in documents or syntax trees.
The annotation used to mark portions of a syntax tree to be formatted.
Gets the formatting rules that would be applied if left unspecified.
Formats the whitespace in a document.
The document to format.
An optional set of formatting options. If these options are not supplied the current set of options from the document's workspace will be used.
An optional cancellation token.
The formatted document.
Formats the whitespace in an area of a document corresponding to a text span.
The document to format.
The span of the document's text to format.
An optional set of formatting options. If these options are not supplied the current set of options from the document's workspace will be used.
An optional cancellation token.
The formatted document.
Formats the whitespace in areas of a document corresponding to multiple non-overlapping spans.
The document to format.
The spans of the document's text to format.
An optional set of formatting options. If these options are not supplied the current set of options from the document's workspace will be used.
An optional cancellation token.
The formatted document.
Formats the whitespace in areas of a document corresponding to annotated nodes.
The document to format.
The annotation used to find nodes to identify spans to format.
An optional set of formatting options. If these options are not supplied the current set of options from the document's workspace will be used.
An optional cancellation token.
The formatted document.
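The document-level overloads above live on the static Formatter class. A sketch of the whole-document and annotation-driven variants (the document is assumed to come from a loaded workspace; passing null options uses the workspace's current options):

```csharp
using System.Threading;
using System.Threading.Tasks;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.Formatting;

static class FormatSample
{
    // Format all whitespace in the document using the workspace's options.
    public static Task<Document> FormatWholeDocumentAsync(
        Document document, CancellationToken ct)
        => Formatter.FormatAsync(document, options: null, cancellationToken: ct);

    // Format only the spans of nodes carrying the well-known Formatter.Annotation.
    public static Task<Document> FormatAnnotatedAsync(
        Document document, CancellationToken ct)
        => Formatter.FormatAsync(
            document, Formatter.Annotation, options: null, cancellationToken: ct);
}
```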
Formats the whitespace in areas of a syntax tree corresponding to annotated nodes.
The root node of a syntax tree to format.
The annotation used to find nodes to identify spans to format.
A workspace used to give the formatting context.
An optional set of formatting options. If these options are not supplied the current set of options from the workspace will be used.
An optional cancellation token.
The formatted tree's root node.
Formats the whitespace of a syntax tree.
The root node of a syntax tree to format.
A workspace used to give the formatting context.
An optional set of formatting options. If these options are not supplied the current set of options from the workspace will be used.
An optional cancellation token.
The formatted tree's root node.
Formats the whitespace in areas of a syntax tree identified by a span.
The root node of a syntax tree to format.
The span within the node's full span to format.
A workspace used to give the formatting context.
An optional set of formatting options. If these options are not supplied the current set of options from the workspace will be used.
An optional cancellation token.
The formatted tree's root node.
Formats the whitespace in areas of a syntax tree identified by multiple non-overlapping spans.
The root node of a syntax tree to format.
The spans within the node's full span to format.
A workspace used to give the formatting context.
An optional set of formatting options. If these options are not supplied the current set of options from the workspace will be used.
An optional cancellation token.
The formatted tree's root node.
Determines the changes necessary to format the whitespace of a syntax tree.
The root node of a syntax tree to format.
A workspace used to give the formatting context.
An optional set of formatting options. If these options are not supplied the current set of options from the workspace will be used.
An optional cancellation token.
The changes necessary to format the tree.
Determines the changes necessary to format the whitespace of a syntax tree.
The root node of a syntax tree to format.
The span within the node's full span to format.
A workspace used to give the formatting context.
An optional set of formatting options. If these options are not supplied the current set of options from the workspace will be used.
An optional cancellation token.
The changes necessary to format the tree.
Determines the changes necessary to format the whitespace of a syntax tree.
The root node of a syntax tree to format.
The spans within the node's full span to format.
A workspace used to give the formatting context.
An optional set of formatting options. If these options are not supplied the current set of options from the workspace will be used.
An optional cancellation token.
The changes necessary to format the tree.
Organizes the imports in the document.
The document to organize.
The cancellation token that the operation will observe.
The document with organized imports. If the language does not support organizing imports, or if no changes were made, this method returns .
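Assuming the Formatter.OrganizeImportsAsync entry point (present in recent Roslyn versions), a minimal sketch:

```csharp
using System.Threading;
using System.Threading.Tasks;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.Formatting;

static class OrganizeImportsSample
{
    // Sketch: organize the usings/Imports of a document. If the language does
    // not support organizing imports, the document comes back unchanged.
    public static Task<Document> OrganizeAsync(Document document, CancellationToken ct)
        => Formatter.OrganizeImportsAsync(document, ct);
}
```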
Formats the whitespace in areas of a document corresponding to multiple non-overlapping spans.
The document to format.
The spans of the document's text to format. If null, the entire document should be formatted.
Line formatting options.
Formatting options, if available. Null for non-Roslyn languages.
Cancellation token.
The formatted document.
Provide a custom formatting operation provider that can intercept/filter/replace default formatting operations.
All methods defined in this class can be called concurrently. Must be thread-safe.
Returns SuppressWrappingIfOnSingleLineOperations under a node either by itself or by
filtering/replacing operations returned by NextOperation
returns AnchorIndentationOperations under a node either by itself or by filtering/replacing operations returned by NextOperation
returns IndentBlockOperations under a node either by itself or by filtering/replacing operations returned by NextOperation
returns AlignTokensOperations under a node either by itself or by filtering/replacing operations returned by NextOperation
returns AdjustNewLinesOperation between two tokens either by itself or by filtering/replacing an operation returned by NextOperation
returns AdjustSpacesOperation between two tokens either by itself or by filtering/replacing an operation returned by NextOperation
Returns SuppressWrappingIfOnSingleLineOperations under a node either by itself or by
filtering/replacing operations returned by NextOperation
returns AnchorIndentationOperations under a node either by itself or by filtering/replacing operations returned by NextOperation
returns IndentBlockOperations under a node either by itself or by filtering/replacing operations returned by NextOperation
returns AlignTokensOperations under a node either by itself or by filtering/replacing operations returned by NextOperation
returns AdjustNewLinesOperation between two tokens either by itself or by filtering/replacing an operation returned by NextOperation
returns AdjustSpacesOperation between two tokens either by itself or by filtering/replacing an operation returned by NextOperation
indicate how many lines are needed between two tokens
Options for .
-
the operation will leave lineBreaks as they are if the original lineBreaks are equal to or greater than the given lineBreaks
-
the operation will force existing lineBreaks to the given lineBreaks
indicate how many spaces are needed between two tokens
Options for .
Preserve spaces as they are
means a default space operation created by the formatting
engine by itself. It has its own option kind to indicate that this is an operation
generated by the engine itself.
means forcing the specified spaces between two tokens if two
tokens are on a single line.
means forcing the specified spaces regardless of positions of two tokens.
If two tokens are on a single line, second token will be placed at current indentation if possible
align first tokens on lines among the given tokens to the base token
option to control behavior
preserve relative spaces between anchor token and first tokens on lines within the given text span
as long as they don't have explicit line operations associated with them
create anchor indentation region around start and end token
right after anchor token to end of end token will become anchor region
create anchor indentation region more explicitly by providing all necessary information.
create suppress region around start and end token
create suppress region around the given text span
create indent block region around the start and end token with the given indentation delta added to the existing indentation at the position of the start token
create indent block region around the given text span with the given indentation delta added to the existing indentation at the position of the start token
create indent block region around the start and end token with the given indentation delta added to the column of the base token
create indent block region around the given text span with the given indentation delta added to the column of the base token
instruct the engine to try to align the first tokens on the lines among the given tokens to the base token
instruct the engine to try to put the given lines between two tokens
instruct the engine to try to put the given spaces between two tokens
return AnchorIndentationOperation for the node provided by the given formatting rules
return IndentBlockOperation for the node provided by the given formatting rules
return AlignTokensOperation for the node provided by the given formatting rules
return AdjustNewLinesOperation for the node provided by the given formatting rules
return AdjustSpacesOperation for the node provided by the given formatting rules
set indentation level for the given text span. it can be relative, absolute, or dependent on other tokens
Options for .
This indentation will be a delta to the first token in the line in which the base token is present
will be interpreted as delta of its enclosing indentation
will be interpreted as absolute position
Mask for relative position options
Mask for position options.
Each specifies one of the position options to indicate the primary
behavior for the operation.
Increase the if the block is part of a
condition of the anchor token. For example:
if (value is
{ // This open brace token is part of a condition of the 'if' token.
Length: 2
})
suppress formatting operations within the given text span
Options for .
-
no wrapping if the given tokens are on the same line
-
no wrapping regardless of relative positions of two tokens
-
no spacing regardless of relative positions of two tokens
Completely disable formatting within a span.
Automatic (on-type) formatting options.
Formatting options stored in editorconfig.
For use in the shared CodeStyle layer. Keep in sync with FormattingOptions.IndentStyle.
Default value of 120 was picked based on the amount of code in a github.com diff at 1080p.
That resolution is the most common value as per the last DevDiv survey as well as the latest
Steam hardware survey. This also seems to be a reasonable default length in that shorter
lengths can often feel too cramped for .NET languages, which are often starting with a
default indentation of at least 16 (for namespace, class, member, plus the final construct
indentation).
TODO: Currently the option has no storage and always has its default value. See https://github.com/dotnet/roslyn/pull/30422#issuecomment-436118696.
Internal option -- not exposed to tooling.
Internal option -- not exposed to editorconfig tooling.
Options that we expect the user to set in editorconfig.
Options that can be set via editorconfig but we do not provide tooling support.
Language agnostic defaults.
a tweaked version of our interval tree to meet the formatting engine's needs.
it now has the ability to return the smallest span that contains a position, rather than
all intersecting or overlapping spans
this class maintains contextual information such as the
indentation of the current position, the base token to follow at the current position, etc.
data that will be used in an interval tree related to Anchor.
data that will be used in an interval tree related to Anchor.
data that will be used in an interval tree related to indentation.
data that will be used in an interval tree related to indentation.
Caches the value produced by .
if the field is not yet initialized; otherwise, the value
returned from .
Represents an indentation in which a fixed offset () is applied to a reference
indentation amount ().
The reference indentation data which needs to be adjusted.
The adjustment to apply to the value provided by
.
data that will be used in an interval tree related to suppressing spacing operations.
data that will be used in an interval tree related to suppressing spacing operations.
data that will be used in an interval tree related to suppressing wrapping operations.
data that will be used in an interval tree related to suppressing wrapping operations.
rewrite the node with the given trivia information in the map
It is very common to be formatting lots of documents at the same time, with the same set of formatting rules and
options. To help with that, cache the last set of ChainedFormattingRules that was produced, as it is not a cheap
type to create.
Stored as a instead of a so we don't have
to worry about torn write concerns.
return summary for current formatting work
this actually applies formatting operations to trivia between two tokens
this actually applies formatting operations to trivia between two tokens
span in the tree to format
rewrite the tree info root node with the trivia information in the map
represents a general trivia between two tokens. slightly more expensive than others since it
needs to calculate stuff unlike other cases
this collector gathers formatting operations that are based on a node
it represents a token that can be inside of the token stream or outside of it.
it uses an index to navigate to the previous and next tokens in the stream to make navigation faster, and regular
Previous/NextToken for tokens outside of the stream.
this object is supposed to be short-lived but is created many times; that is why it is a struct
(same reason SyntaxToken is a struct - to reduce heap allocation)
it holds onto the space and wrapping operations that need to run between two tokens.
This class takes care of tokens consumed in the formatting engine.
It maintains information about what has changed compared to the original token information, and answers
queries about tokens.
Thread-safe collection that holds onto changes
Get column of the token
* column means the text position on a line where all tabs are converted to spaces; the first position on a line is column 0
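The column definition above can be illustrated with a small hypothetical helper (not a Roslyn API):

```csharp
static class ColumnSample
{
    // Hypothetical helper illustrating the definition above: a tab advances
    // the column to the next multiple of tabSize, any other character
    // advances it by one, and the first position on a line is column 0.
    public static int GetColumn(string line, int position, int tabSize)
    {
        int column = 0;
        for (int i = 0; i < position; i++)
            column += line[i] == '\t' ? tabSize - (column % tabSize) : 1;
        return column;
    }
    // e.g. GetColumn("\tx", 1, 4) == 4 and GetColumn("ab\tc", 3, 4) == 4
}
```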
this provides information about the syntax tree the formatting service is formatting.
it provides the necessary abstraction between different kinds of syntax trees, so that ones that contain
actual text or a cache can answer queries more efficiently.
it holds onto trivia information between two tokens
This is the ID reported for formatting diagnostics.
This special diagnostic can be suppressed via #pragma to prevent the formatter from making changes to
code formatting within the span where the diagnostic is suppressed.
Contains changes that can be either applied to different targets such as a buffer or a tree
or examined to be used in other places such as quick fix.
set up space string caches
format the trivia at the line column and add the changes to the changes list
create whitespace for the delta at the line column and add the changes to the changes list
return whether this formatting succeeded or not
for example, if there are skipped tokens in any of the trivia between tokens
we consider formatting of this region to have failed
check whether given trivia is whitespace trivia or not
check whether given trivia is end of line trivia or not
true if previousTrivia is _ and nextTrivia is a Visual Basic comment
check whether given trivia is a Comment in VB or not
It is never reachable in C# since it follows a test for
LineContinuation Character.
check whether given string is either null or whitespace
check whether given char is whitespace
check whether given char is new line char
create whitespace trivia
create end of line trivia
return line column rule for the given two trivia
format the given trivia at the line column position and put result to the changes list
format the given trivia at the line column position and put text change result to the changes list
returns true if the trivia contains a Line break
get line column rule between two trivia
if the given trivia is the very first or the last trivia between two normal tokens and
if the trivia is structured trivia, get one token that belongs to the structured trivia and one that belongs to the normal token stream
check whether string between start and end position only contains whitespace
check whether first line between two tokens contains only whitespace
return 0 or 1 based on the line column of trivia1's end point
this is based on our structured trivia's implementation detail that some structured trivia can have
one new line at the end of the trivia
absolute line number from first token
absolute column from beginning of a line
there is only whitespace on this line
relative line number between calls
relative spaces between calls
there is only whitespace in this space
force a text change regardless of line and space changes
Get the name of the target type of specified extension method declaration. The node provided must be an
extension method declaration, i.e. calling `TryGetDeclaredSymbolInfo()` on `node` should return a
`DeclaredSymbolInfo` of kind `ExtensionMethod`. If the return value is null, then it means this is a
"complex" method (as described at ).
Provides helpers for working across "blocks" of statements in an agnostic fashion across VB and C#. Both
languages have quirks here that this API attempts to smooth out. For example, many things in VB are 'blocks'
(like ClassBlocks and MethodBlocks). However, only a subset of those can have executable statements. Similarly,
C# has actual BlockSyntax nodes ({ ... }), but it can also have sequences of executable statements not
contained by those (for example statements in a case-clause in a switch-statement).
A block that has no semantics other than introducing a new scope. That is only C# BlockSyntax.
Gets the directly parenting block for the statement if it has one. For C#, this is the direct parent
BlockSyntax, SwitchSectionSyntax, or CompilationUnit if the statement is parented by a GlobalStatementSyntax.
This returns for any other cases (like an embedded statement).
If this returns a parent value then the will always be found within the
statements returned by on that value
A node that contains a list of statements. In C#, this is BlockSyntax, SwitchSectionSyntax and
CompilationUnitSyntax. In VB, this includes all block statements such as a MultiLineIfBlockSyntax.
A node that can host a list of statements or a single statement. In addition to every "executable block",
this also includes C# embedded statement owners.
Gets the statement container node for the statement .
The statement container for .
Checks if the position is on the header of a type (from the start of the type up through its name).
Tries to get an ancestor of the token at the current position, or of the token directly to the left:
e.g.: tokenWithWantedAncestor[||]tokenWithoutWantedAncestor
Returns whether a given declaration can have accessibility or not.
The declaration node to check
A flag that indicates whether to consider modifiers on the given declaration that block adding accessibility.
True if this language supports implementing an interface by signature only. If false,
implementations must specify explicitly which symbol they're implementing.
True if anonymous functions in this language have signatures that include named
parameters that can be referenced later on when the function is invoked. Or, if the
anonymous function is simply a signature that will be assigned to a delegate, and the
delegate's parameter names are used when invoking.
For example, in VB one can do this:
dim v = Sub(x as Integer) Blah()
v(x:=4)
However, in C# that would need to be:
Action<int> v = (int x) => Blah();
v(obj: 4)
Note that in VB one can access 'x' outside of the declaration of the anonymous function,
while in C# 'x' can only be accessed within the anonymous function.
True if a write is performed to the given expression. Note: reads may also be performed
on the expression. For example, in "++a" the expression 'a' is both read from
and written to.
True if a write is performed to the given expression. Note: unlike IsWrittenTo, this
will not return true if reads are performed on the expression as well. For example,
"++a" will return 'false'. However, 'a' in "out a" or "a = 1" will return true.
Return a speculative semantic model for a supported node; otherwise, return null
get all alias names defined in the semantic model
Finds all local function definitions within the syntax references for a given
Gets the that the given node involves.
The node's kind must match any of the following kinds:
- ,
- , or
- .
Given a location in a document, returns the symbol that intercepts the original symbol called at that location.
The position must be the location of an identifier token used as the name of an invocation expression.
controls how much of the type header should be considered. If , only the span up through the type name will be considered. If
then the span through the base-list will be considered.
Contains helpers to allow features and other algorithms to run over C# and Visual Basic code in a uniform fashion.
It should be thought of as a generalized way to apply type-pattern-matching and syntax-deconstruction in a uniform
fashion over the languages. Helpers in this type should only be one of the following forms:
-
'IsXXX' where 'XXX' exactly matches one of the same named syntax (node, token, trivia, list, etc.) constructs that
both C# and VB have. For example 'IsSimpleName' to correspond to C# and VB's SimpleNameSyntax node. These 'checking'
methods should never fail. For non-leaf node types this should be implemented as a type check ('is' in C#, 'TypeOf ... Is'
in VB). For leaf nodes, this should be implemented by deferring to to check against the
raw kind of the node.
-
'GetPartsOfXXX(SyntaxNode node, out SyntaxNode/SyntaxToken part1, ...)' where 'XXX' is one of the same named Syntax constructs
that both C# and VB have, and where the returned parts correspond to the members those nodes have in common across the
languages. For example 'GetPartsOfQualifiedName(SyntaxNode node, out SyntaxNode left, out SyntaxToken dotToken, out SyntaxNode right)'
in both C# and VB. These functions should throw if passed a node that the corresponding 'IsXXX' did not return for.
For nodes that only have a single child, 'GetPartsOfXXX' is not needed and can be replaced with the easier to use
'GetXXXOfYYY' to get that single child.
-
'GetXXXOfYYY' where 'XXX' matches the name of a property on a 'YYY' syntax construct that both C# and VB have. For
example 'GetExpressionOfMemberAccessExpression' corresponding to MemberAccessExpressionSyntax.Expression in both C# and
VB. These functions should throw if passed a node that the corresponding 'IsYYY' did not return for.
For nodes that only have a single child, these functions can stay here. For nodes with multiple children, these should migrate
to and be built off of 'GetPartsOfXXX'.
-
Absolutely trivial questions that relate to syntax and can be asked sensibly of each language. For example,
if certain constructs (like 'patterns') are supported in that language or not.
Importantly, avoid:
-
Functions that attempt to blur the lines between similar constructs in the same language. For example, a QualifiedName
is not the same as a MemberAccessExpression (despite A.B being representable as either depending on context).
Features that need to handle both should make it clear that they are doing so, showing that they're doing the right
thing for the contexts each can arise in (for the above example in 'type' vs 'expression' contexts).
-
Functions which are effectively specific to a single feature and are just trying to find a place to put complex
feature logic such that it can run over VB or C#. For example, a function to determine if a position
is on the 'header' of a node. A 'header' is not a well-defined syntax concept that can be trivially asked of
nodes in either language. It is an encapsulation of a feature (or set of features) level idea that should be in
its own dedicated service.
-
Functions that mutate or update syntax constructs for example 'WithXXX'. These should be on SyntaxGenerator or
some other feature specific service.
-
Functions that return a single item when one language may allow for multiple. For example 'GetIdentifierOfVariableDeclarator'.
In VB a VariableDeclarator can itself have several names, so calling code must be written to check for that and handle
it appropriately. Functions like this make it seem like that doesn't need to be considered, easily allowing for bugs
to creep in.
-
Abbreviating or otherwise changing the names that C# and VB share here. For example use 'ObjectCreationExpression'
not 'ObjectCreation'. This prevents accidental duplication and keeps consistency with all members.
Many helpers in this type currently violate the above 'dos' and 'do nots'. They should be removed and either
inlined directly into the feature that needs it (if only a single feature), or moved into a dedicated service
for that purpose if needed by multiple features.
Returns 'true' if this is a 'reserved' keyword for the language. A 'reserved' keyword is an
identifier that is always treated as being a special keyword, regardless of where it is
found in the token stream. Examples of this are tokens like and
in C# and VB respectively.
Importantly, this does *not* include contextual keywords. If contextual keywords are
important for your scenario, use or . Also, consider using
if all you need is the ability to know
if this is effectively any identifier in the language, regardless of whether the language
is treating it as a keyword or not.
Returns if this is a 'contextual' keyword for the language. A
'contextual' keyword is an identifier that is only treated as being a special keyword in
certain *syntactic* contexts. An example of this is 'yield' in C#. This is only a
keyword if used as 'yield return' or 'yield break'. Importantly, identifiers like , and are *not*
'contextual' keywords. This is because they are not treated as keywords depending on
the syntactic context around them. Instead, the language always treats them as identifiers
that have special *semantic* meaning if they end up not binding to an existing symbol.
Importantly, if is not in the syntactic construct where the
language thinks an identifier should be contextually treated as a keyword, then this
will return .
Or, in other words, the parser must be able to identify these cases in order for the token to be a
contextual keyword. If identification happens afterwards, it's not contextual.
The set of identifiers that have special meaning directly after the `#` token in a
preprocessor directive. For example `if` or `pragma`.
Get the node on the left side of the dot if given a dotted expression.
In VB, we can have a member access expression with a null expression; this may be one of the
following forms:
1) new With { .a = 1, .b = .a } ('.a' refers to the anonymous type)
2) With obj : .m ('.m' refers to the type of 'obj')
3) new T() With { .a = 1, .b = .a } ('.a' refers to the type 'T')
If `allowImplicitTarget` is set to true, the returned node will be set to the appropriate node; otherwise, it will return null.
This parameter has no effect on C# nodes.
Gets the containing expression that is actually a language expression and not just typed
as an ExpressionSyntax for convenience. For example, NameSyntax nodes on the right side
of qualified names and member access expressions are not language expressions, yet the
containing qualified names or member access expressions are indeed expressions.
Call on the `.y` part of a `x?.y` to get the entire `x?.y` conditional access expression. This also works
when there are multiple chained conditional accesses. For example, calling this on '.y' or '.z' in
`x?.y?.z` will both return the full `x?.y?.z` node. This can be used to effectively get 'out' of the RHS of
a conditional access, and commonly represents the full standalone expression that can be operated on
atomically.
Returns the expression node the member is being accessed off of. If
is , this will be the node directly to the left of the dot-token. If
is , then this can return another node in the tree that the member will be accessed
off of. For example, in VB, if you have a member-access-expression of the form ".Length" then this
may return the expression in the surrounding With-statement.
True if this is an argument with just an expression and nothing else (i.e. no ref/out,
no named params, no omitted args).
Returns true for nodes that represent the body of a method.
For VB this will be
MethodBlockBaseSyntax. This will be true for things like constructor, method, and operator
bodies, as well as accessor bodies. It will not be true for things like Sub() or Function()
lambdas.
For C# this will be the BlockSyntax or ArrowExpressionSyntax for a
method/constructor/destructor/operator/accessor. It will not be included for local
functions.
Returns true if the given character is a character which may be included in an
identifier to specify the type of a variable.
Given a , return the representing the span of the member body
it is contained within. This is used to determine whether speculative binding should be
used in performance-critical typing scenarios. Note: if this method fails to find a relevant span, it returns
an empty at position 0.
Returns the parent node that binds to the symbols that the IDE prefers for features like Quick Info and Find
All References. For example, if the token is part of the type of an object creation, the parenting object
creation expression is returned so that binding will return constructor symbols.
Given a that represents an argument, return the string representation of
that argument's name.
Given a that represents an attribute argument, return the string representation of
that argument's name.
Determines if there is preprocessor trivia *between* any of the
provided. The will be deduped and then ordered by position.
Specifically, the first token will not have its leading trivia checked, and the last
token will not have its trailing trivia checked. All other trivia will be checked to
see if it contains a preprocessor directive.
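The endpoint exclusion described above can be sketched as follows. This is a language-agnostic Python illustration, not Roslyn's implementation; the token shape and the names `has_directive_between` and `is_directive` are hypothetical stand-ins:

```python
def has_directive_between(tokens, is_directive):
    """Check for preprocessor trivia strictly *between* the given tokens.

    `tokens` is a list of (position, leading_trivia, trailing_trivia) triples,
    where each trivia list holds plain strings standing in for syntax trivia.
    """
    # Dedupe by position, then order by position.
    by_pos = {pos: (lead, trail) for pos, lead, trail in tokens}
    ordered = [by_pos[pos] for pos in sorted(by_pos)]
    for i, (leading, trailing) in enumerate(ordered):
        # The first token's leading trivia and the last token's trailing
        # trivia lie outside the span, so they are deliberately not checked.
        if i > 0 and any(is_directive(t) for t in leading):
            return True
        if i < len(ordered) - 1 and any(is_directive(t) for t in trailing):
            return True
    return False
```

For example, a `#endif` in the last token's trailing trivia would not count, while the same trivia on an interior token would.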
Similar to , this gets the containing
expression that is actually a language expression and not just typed as an ExpressionSyntax for convenience.
However, this goes beyond that method in that if this expression is the RHS of a conditional access
(i.e. a?.b()) it will also return the root of the conditional access expression tree.
The intuition here is that this will give the topmost expression node that could realistically be
replaced with any other expression. For example, with a?.b() technically .b() is an
expression. But that cannot be replaced with something like (1 + 1) (as a?.(1 + 1) is not
legal). However, in a?.b(), then a itself could be replaced with (1 + 1)?.b() to form
a legal expression.
Provides a uniform view of SyntaxKinds over C# and VB for constructs they have
in common.
Gets the syntax kind for a multi-line comment.
The raw syntax kind for a multi-line comment; otherwise, if the language does not
support multi-line comments.
Provides a uniform view of SyntaxKinds over C# and VB for constructs they have
in common.
Helper service for telling you what type can be inferred to be viable in a particular
location in code. This is useful for features that are starting from code that doesn't bind,
but would like to know the type that code should have in the location where it is found. For
example:
int i = Here();
If 'Here()' doesn't bind, then this class can be used to say that it is currently in a
location whose type has been inferred to be 'int' from the surrounding context. Note: this
is simply a best-effort guess. 'byte/short/etc.' as well as any types user-convertible to
int would also be valid here; however, 'int' seems the most reasonable when considering user
intuition.
Retrieves all symbols that could collide with a symbol at the specified location.
A symbol can possibly collide with the location if it is available to that location and/or
could cause a compiler error if its name is re-used at that location.
Given a symbol in source, returns the syntax nodes that comprise its declarations.
This differs from symbol.Locations in that Locations returns a list of ILocations that
normally correspond to the name node of the symbol.
Helper class to aggregate some numeric values for logging on the client side
The key here is an object even though we will often be putting enums into this map; the problem with the use of enums or other value
types is they prevent the runtime from sharing the same JITted code for each different generic instantiation. In this case,
the cost of boxing is cheaper than the cost of the extra JIT.
a logger that aggregates multiple loggers
a logger that doesn't do anything
A logger that publishes events to ETW using an EventSource.
A logger that publishes events to ETW using an EventSource.
Defines a log aggregator to create a histogram
Writes out these statistics to a property bag for sending to telemetry.
The prefix given to any properties written. A period is used to delimit between the
prefix and the value.
An interaction class defines how much time is expected to reach a time point, the response
time point being the most commonly used. The interaction classes correspond to human perception,
so, for example, all interactions in the Fast class are perceived as fast and roughly feel like
they have the same performance. By defining these interaction classes, we can describe
performance using adjectives that have a precise, consistent meaning.
LogMessage that creates its key-value map lazily
Creates a with default , since
KV Log Messages are by default more informational and should be logged as such.
Type of log it is making.
Log some traces of an activity (default)
Log an explicit user action
Represents telemetry data that's classified as personally identifiable information.
Represents telemetry data that's classified as personally identifiable information.
This EventSource exposes our events to ETW.
RoslynEventSource GUID is {bf965e67-c7fb-5c5b-d98f-cdf68f8154c2}.
When updating this class, use the following to also update Main\Source\Test\Performance\Log\RoslynEventSourceParser.cs:
Main\Tools\Source\TraceParserGen\bin\Debug\TraceParserGen.exe Microsoft.CodeAnalysis.Workspaces.dll -eventsource:RoslynEventSource
Use this command to register the ETW manifest on any machine where you need to decode events in xperf/etlstackbrowse:
"\\clrmain\tools\managed\etw\eventRegister\bin\Debug\eventRegister.exe" Microsoft.CodeAnalysis.Workspaces.dll
Logs an informational block with given 's representation as the message
and specified .
On dispose of the returned disposable object, it logs the 'tick' count between the start and end of the block.
Unlike other logging methods on , this method does not check
if the specified was explicitly enabled.
Instead it checks if the was enabled at level.
Logs an informational message block with the given and specified .
On dispose of the returned disposable object, it logs the 'tick' count between the start and end of the block.
Unlike other logging methods on , this method does not check
if the specified was explicitly enabled.
Instead it checks if the was enabled at level.
This tracks the logged message. On instantiation, it logs 'Started block' with other event data.
On dispose, it logs 'Ended block' with the same event data so we can track which block started and ended when looking at logs.
next unique block id that will be given to each LogBlock
return next unique pair id
maximum value
minimum value
average value of the total data set
most frequent value in the total data set
difference between max and min value
number of data points in the total data set
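The statistics listed above could be computed along these lines. This is a rough Python sketch under the assumption that the data set is a plain list of numbers; `summarize` is a hypothetical name, not an API from this library:

```python
from statistics import mean, mode

def summarize(data):
    """Compute the basic statistics listed above for a data set."""
    return {
        "maximum": max(data),             # maximum value
        "minimum": min(data),             # minimum value
        "mean": mean(data),               # average value of the data set
        "mode": mode(data),               # most frequent value
        "range": max(data) - min(data),   # difference between max and min
        "count": len(data),               # number of data points
    }
```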
Writes out these statistics to a property bag for sending to telemetry.
The prefix given to any properties written. A period is used to delimit between the
prefix and the value.
Implementation of that produces timing debug output.
Implementation of that produces timing debug output.
no op log block
Enum to uniquely identify each function location.
Logger interface that an actual logger should implement
Answers whether logging is enabled for the specific function id
log a specific event with context message
log a start event with context message
log an end event
Provides a way to log activities to various back ends such as ETL, code markers, etc.
next unique block id that will be given to each LogBlock
Gives a way to explicitly set or replace the logger
Ensures we have a logger by installing one from the workspace service if one is not there already.
log a specific event with a simple context message which should be very cheap to create
log a specific event with a context message that will only be created when it is needed.
the messageGetter should be cheap to create; in other words, it shouldn't capture any locals
log a specific event with a context message that requires some arguments to be created when requested.
given arguments will be passed to the messageGetter so that it can create the context message without requiring lifted locals
log a specific event with a context message that requires some arguments to be created when requested.
given arguments will be passed to the messageGetter so that it can create the context message without requiring lifted locals
log a specific event with a context message that requires some arguments to be created when requested.
given arguments will be passed to the messageGetter so that it can create the context message without requiring lifted locals
log a specific event with a context message that requires some arguments to be created when requested.
given arguments will be passed to the messageGetter so that it can create the context message without requiring lifted locals
log a specific event with a context message.
return next unique pair id
simplest way to log a start and end pair
simplest way to log a start and end pair with a simple context message which should be very cheap to create
log a start and end pair with a context message that will only be created when it is needed.
the messageGetter should be cheap to create; in other words, it shouldn't capture any locals
log a start and end pair with a context message that requires some arguments to be created when requested.
given arguments will be passed to the messageGetter so that it can create the context message without requiring lifted locals
log a start and end pair with a context message that requires some arguments to be created when requested.
given arguments will be passed to the messageGetter so that it can create the context message without requiring lifted locals
log a start and end pair with a context message that requires some arguments to be created when requested.
given arguments will be passed to the messageGetter so that it can create the context message without requiring lifted locals
log a start and end pair with a context message that requires some arguments to be created when requested.
given arguments will be passed to the messageGetter so that it can create the context message without requiring lifted locals
log a start and end pair with a context message.
This tracks the logged message. On instantiation, it logs 'Started block' with other event data.
On dispose, it logs 'Ended block' with the same event data so we can track which block started and ended when looking at logs.
This tracks the logged message. On instantiation, it logs 'Started block' with other event data.
On dispose, it logs 'Ended block' with the same event data so we can track which block started and ended when looking at logs.
Defines logging severity levels. Each logger may choose to report differently based on the level of the message being logged.
Copied from Microsoft.Extensions.Logging https://docs.microsoft.com/en-us/dotnet/api/microsoft.extensions.logging.loglevel
Logs that contain the most detailed messages. These messages may contain sensitive application data. These messages are disabled by default and should never be enabled in a production environment.
Logs that are used for interactive investigation during development. These logs should primarily contain information useful for debugging and have no long-term value.
Logs that track the general flow of the application. These logs should have long-term value.
Logs that highlight an abnormal or unexpected event in the application flow, but do not otherwise cause the application execution to stop.
Logs that highlight when the current flow of execution is stopped due to a failure. These should indicate a failure in the current activity, not an application-wide failure.
Logs that describe an unrecoverable application or system crash, or a catastrophic failure that requires immediate attention.
Not used for writing log messages. Specifies that a logging category should not write any messages.
Log message that can generate its string lazily
Logger will call this to return LogMessage to its pool
Optional interface that can be used to hear about when expensive global operations (like a 'build') occur in the
current host.
Raised when a global operation is started
Raised when a global operation is stopped
Starts a new global operation
The of the keyword in Visual Basic, or
for C# scenarios. This value is used to improve performance in the token classification
fast-path by avoiding unnecessary calls to .
Service which can analyze a span of a document and identify all locations of declarations or references to
symbols which are marked .
An that comes from . It behaves just like a normal
but remembers which language the is, so you don't have to
pass that information redundantly when calling .
Cached internal values read from or .
Creates a new that contains the changed value.
Gets the value of the option, or the default value if not otherwise set.
Gets the value of the option, or the default value if not otherwise set.
Gets the value of the option, or the default value if not otherwise set.
Creates a new that contains the changed value.
Gets the value of the option, or the default value if not otherwise set.
Creates a new that contains the changed value.
Creates a new that contains the changed value.
Checks if the value is an internal representation -- does not cover all cases, just code style options.
Checks if the value is a public representation -- does not cover all cases, just code style options.
Provides services for reading and writing global client (in-proc) options
shared across all workspaces.
Gets the current value of the specific option.
Gets the current value of the specific option.
Gets the current value of the specific option.
Gets the current values of specified options.
All options are read atomically.
Sets and persists the value of a global option.
Sets the value of a global option.
Invokes registered option persisters.
Triggers option changed event for handlers registered with .
Atomically sets the values of specified global options. The option values are persisted.
Triggers option changed event for handlers registered with .
Returns true if any option changed its value stored in the global options.
Refreshes the stored value of an option. This should only be called from persisters.
Does not persist the new option value.
Returns true if the option changed its value stored in the global options.
Enables legacy APIs to access global options from workspace.
Not available OOP. Only use in client code and when IGlobalOptionService can't be MEF imported.
Only used by and to implement legacy public APIs:
and .
Exportable by a host to specify the save and restore behavior for a particular set of
values.
Gets the . If the persister does not already exist, it is created.
This method is safe for concurrent use from any thread. No guarantees are made regarding the use of the UI
thread.
A cancellation token the operation may observe.
The option persister.
Stores options that are not defined by Roslyn and do not implement .
Sets values of options that may be stored in (public options).
Clears of registered workspaces so that next time
are queried for the options new values are fetched from
.
The base type of all types that specify where options are stored.
The name of the providers for .editorconfig. Both the current and legacy providers will use this name, so that any other clients can
order relative to the pair. The two factories are unordered themselves because only one ever actually gives a real provider.
Implements in-proc only storage for .
Supports tracking changed options.
Options that are not set in the option set are read from global options and cached.
Cached values read from global options. Stores internal values of options.
Keys of options whose current value stored in differs from the value originally read from global options.
Some options store their values in a type that's not accessible publicly.
The mapping provides translation between the two representations.
Some options store their values in a type that's not accessible publicly.
The mapping provides translation between the two representations.
The option that stores the value internally.
Converts internal option value representation to public.
Returns a new internal value created by updating to .
Interface implemented by public options (Option and PerLanguageOption)
to distinguish them from internal ones ( and ).
Creates a serializer for an enum value that uses the enum field names.
Creates a serializer for an enum value given a between value names and the corresponding enum values.
Creates a serializer for an enum value given a between value names and the corresponding enum values.
specifies alternative value representations for backward compatibility.
Creates a serializer for an enum value given a between value names and the corresponding enum values.
specifies alternative value representations for backward compatibility.
Serializes arbitrary editorconfig option value (including naming style preferences) into a given builder.
Replaces existing value if present.
Specifies that an option should be read from an .editorconfig file.
Specifies that an option should be read from an .editorconfig file.
Gets the editorconfig string representation for the specified .
Internal base option type that is available in both the Workspaces layer and CodeStyle layer.
Its definition in the Workspaces layer sub-types "IOption", and its definition in the CodeStyle layer
explicitly defines all the members from the "IOption" type, as "IOption" is not available in the CodeStyle layer.
This ensures that all the sub-types of in either layer see an identical
set of interface members.
Group/sub-feature associated with an option.
Group/sub-feature associated with an option.
Optional parent group.
A localizable resource description string for the option group.
Name of the option group
Relative priority of the option group with respect to other option groups within the same feature.
Optional group/sub-feature for this option.
A unique name of the option used in editorconfig.
True if the value of the option may be stored in an editorconfig file.
Mapping between the public option storage and internal option storage.
The untyped/boxed default value of the option.
The type of the option value.
Marker interface for options that have the same value for all languages.
The language name that supports this option, or null if it's supported by multiple languages.
This is an optional metadata used for:
- Analyzer id to option mapping, used (for example) by configure code-style code action
- EditorConfig UI to determine whether to put this option under [*.cs], [*.vb], or [*.{cs,vb}]
Note that this property is not (and should not be) used for computing option values or storing options.
Marker interface for .
This option may apply to multiple languages, such that the option can have a different value for each language.
An option that can be specified once per language.
Attempts to get the package sources applicable to the workspace. Note: this call is made on a best effort
basis. If the results are not available (for example, they have not been computed, and doing so would
require switching to the UI thread), then an empty array can be returned.
A collection of package sources.
The pattern matcher is not thread-safe. Do not use the pattern matcher across multiple threads concurrently. It
also keeps an internal cache of data for speeding up operations. As such, it should be disposed when done to
release the cached data; release the matcher appropriately once you no longer need it. Also, while the
pattern matcher is culture-aware, it uses the culture specified in the constructor.
Encapsulates the matching logic responsible for matching an all-lowercase pattern against
a candidate using CamelCase matching; i.e. this code is responsible for finding the
match between "cofipro" and "CodeFixProvider".
Encapsulates the matching logic responsible for matching an all-lowercase pattern against
a candidate using CamelCase matching; i.e. this code is responsible for finding the
match between "cofipro" and "CodeFixProvider".
Returns null if no match was found, 1 if a contiguous match was found, 2 if a
match was found that starts at the beginning of the candidate, and 3 if a contiguous
match was found that starts at the beginning of the candidate.
Updates the currently stored 'best result' if the current result is better.
Returns 'true' if no further work is required and we can break early, or
'false' if we need to keep on going.
If 'weight' is better than 'bestWeight' and matchSpanToAdd is not null, then
matchSpanToAdd will be added to matchedSpansInReverse.
Construct a new PatternMatcher using the specified culture.
The culture to use for string searching and comparison.
Whether or not the matching parts of the candidate should be supplied in results.
Whether or not close matches should count as matches.
Internal helper for MatchPatternInternal
PERF: Designed to minimize allocations in common cases.
If there's no match, then null is returned.
If there's a single match, or the caller only wants the first match, then it is returned (as a Nullable)
If there are multiple matches, and the caller wants them all, then a List is allocated.
The word being tested.
The segment of the pattern to check against the candidate.
The result array to place the matches in.
If a fuzzy match should be performed
If there's only one match, then the return value is that match. Otherwise it is null.
Do the two 'parts' match? i.e. Does the candidate part start with the pattern part?
The candidate text
The span within the text
The pattern text
The span within the text
Options for doing the comparison (case sensitive or not)
True if the span identified by within starts with
the span identified by within .
Does the given part start with the given pattern?
The candidate text
The span within the text
The pattern text
Options for doing the comparison (case sensitive or not)
True if the span identified by within starts with
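A minimal sketch of this prefix check; simple lowercasing stands in for the culture-aware comparison the real matcher performs, and the names are illustrative:

```python
def part_starts_with(candidate, candidate_span, pattern, ignore_case):
    # candidate_span is a (start, length) pair identifying the part of the
    # candidate to test against the pattern.
    start, length = candidate_span
    part = candidate[start:start + length]
    if len(pattern) > len(part):
        return False
    prefix = part[:len(pattern)]
    if ignore_case:
        return prefix.lower() == pattern.lower()
    return prefix == pattern
```

For example, the "Fix" part of "CodeFixProvider" (span (4, 3)) starts with "fi" case-insensitively but not case-sensitively.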
First we break up the given pattern by dots. Each portion of the pattern between the
dots is a 'Segment'. The 'Segment' contains information about the entire section of
text between the dots, as well as information about any individual 'Words' that we
can break the segment into.
First we break up the given pattern by dots. Each portion of the pattern between the
dots is a 'Segment'. The 'Segment' contains information about the entire section of
text between the dots, as well as information about any individual 'Words' that we
can break the segment into.
Information about a chunk of text from the pattern. The chunk is a piece of text, with
cached information about the character spans within it. Character spans separate out
capitalized runs and lowercase runs. i.e. if you have AAbb, then there will be two
character spans, one for AA and one for bb.
Character spans separate out
capitalized runs and lowercase runs. i.e. if you have AAbb, then there will be two
character spans, one for AA and one for bb.
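The run-splitting can be sketched with a short regular expression; this is an illustrative stand-in, not the actual implementation:

```python
import re

def character_spans(chunk):
    # Break a pattern chunk into its capitalized runs and its
    # lowercase/digit runs, e.g. "AAbb" -> ["AA", "bb"].
    return re.findall(r"[A-Z]+|[a-z0-9]+", chunk)
```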
Not readonly as this value caches data within it, and so it needs to be able to mutate.
Determines if a given candidate string matches under a multiple word query text, as you
would find in features like Navigate To.
If this was a match, a set of match types that occurred while matching the
patterns. If it was not a match, it returns null.
The type of match that occurred.
True if this was a case sensitive match.
The spans in the original text that were matched. Only returned if the
pattern matcher is asked to collect these spans.
Note(cyrusn): this enum is ordered from strongest match type to weakest match type.
The candidate string matched the pattern exactly.
The pattern was a prefix of the candidate string.
The pattern was a substring of the candidate string, but in a way that wasn't a CamelCase match. The
pattern had to have at least one non-lowercase letter in it, and the match needs to be case sensitive.
This will match 'savedWork' against 'FindUnsavedWork'.
The pattern was a substring of the candidate string, starting at a word within that candidate. The pattern
can be all lowercase here. This will match 'save' or 'Save' in 'FindSavedWork'
All camel-humps in the pattern matched a camel-hump in the candidate. All camel-humps
in the candidate were matched by a camel-hump in the pattern.
Example: "CFPS" matching "CodeFixProviderService"
Example: "cfps" matching "CodeFixProviderService"
Example: "CoFiPrSe" matching "CodeFixProviderService"
All camel-humps in the pattern matched a camel-hump in the candidate. The first camel-hump
in the pattern matched the first camel-hump in the candidate. There was no gap in the camel-
humps in the candidate that were matched.
Example: "CFP" matching "CodeFixProviderService"
Example: "cfp" matching "CodeFixProviderService"
Example: "CoFiPRo" matching "CodeFixProviderService"
All camel-humps in the pattern matched a camel-hump in the candidate. The first camel-hump
in the pattern matched the first camel-hump in the candidate. There was at least one gap in
the camel-humps in the candidate that were matched.
Example: "CP" matching "CodeFixProviderService"
Example: "cp" matching "CodeFixProviderService"
Example: "CoProv" matching "CodeFixProviderService"
All camel-humps in the pattern matched a camel-hump in the candidate. The first camel-hump
in the pattern did not match the first camel-hump in the candidate. There was no gap in the camel-
humps in the candidate that were matched.
Example: "FP" matching "CodeFixProviderService"
Example: "fp" matching "CodeFixProviderService"
Example: "FixPro" matching "CodeFixProviderService"
All camel-humps in the pattern matched a camel-hump in the candidate. The first camel-hump
in the pattern did not match the first camel-hump in the candidate. There was at least one gap in
the camel-humps in the candidate that were matched.
Example: "FS" matching "CodeFixProviderService"
Example: "fs" matching "CodeFixProviderService"
Example: "FixSer" matching "CodeFixProviderService"
The pattern matches the candidate in a fuzzy manner. Fuzzy matching allows for
a certain amount of misspellings, missing words, etc. See for
more details.
The pattern was a substring of the candidate and wasn't either or . This can happen when the pattern is all
lowercase and matches some non-word portion of the candidate. For example, finding 'save' in 'GetUnsavedWork'. This will not match across
word boundaries; i.e. it will not match 'save' to 'VisaVerify' even though 'saVe' is in that candidate.
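A heavily reduced sketch of how the strongest kinds above order against each other. The real matcher also distinguishes case sensitivity, word-start substrings, the CamelCase kinds, and fuzzy matches; this shows only the exact/prefix/substring tiers:

```python
def classify(pattern, candidate):
    # Kinds are ordered from strongest to weakest, mirroring the enum above.
    if candidate == pattern:
        return "Exact"
    if candidate.lower().startswith(pattern.lower()):
        return "Prefix" if len(pattern) < len(candidate) else "Exact"
    if pattern.lower() in candidate.lower():
        return "Substring"
    return None
```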
Represents the progress of an operation. Commonly used to update a UI visible to a user when a long running
operation is happening.
Used when bridging from an api that does not show progress to the user to an api that can update progress if
available. This should be used sparingly. Locations that currently do not show progress should ideally be
migrated to ones that do so that long running operations are visible to the user in a coherent fashion.
When passed to an appropriate , will update the UI showing the progress of the
current operation to the specified .
progress.Report(CodeAnalysisProgress.Description("Renaming files"));
When passed to an appropriate , will add the requested number of incomplete items to
the UI showing the progress of the current operation. This is commonly presented with a progress bar. An
optional can also be provided to update the UI accordingly (see ).
The number of incomplete items left to perform.
Optional description to update the UI to.
progress.Report(CodeAnalysisProgress.AddIncompleteItems(20));
When passed to an appropriate , will indicate that some items of work have
transitioned from being incomplete (see ) to complete. This is commonly
presented with a progress bar. An optional can also be provided to update the UI
accordingly (see ).
The number of items that were completed. Must be greater than or equal to 1.
Optional description to update the UI to.
progress.Report(CodeAnalysisProgress.CompleteItem());
When passed to an appropriate , will indicate that all progress should be reset for
the current operation. This is normally done when the code action is performing some new phase and wishes for
the UI progress bar to restart from the beginning.
Currently internal as only roslyn needs this in the impl of our suggested action (we use a progress bar to
compute the work, then reset the progress to apply all the changes). Could be exposed later to 3rd party code
if a demonstrable need is presented.
Service which can analyze a span of a document and identify all locations of parameters or locals that are ever
reassigned. Note that the locations provided are not the reassignment points. Rather if a local or parameter
is ever reassigned, these are all the locations of those locals or parameters within that span.
Tries to get the type of the lambda parameter of the argument for each candidate symbol.
symbols corresponding to or
Here, some_args can be multi-variable lambdas as well, e.g. f((a,b) => a+b, (a,b,c)=>a*b*c.Length)
Ordinal of the argument of the function: (a,b) or (a,b,c) in the example above
Ordinal of the lambda parameters, e.g. a, b or c.
If container is a tuple type, any of its tuple elements which has a friendly name will cause the suppression
of the corresponding default name (ItemN). In that case, Rest is also removed.
The named symbols to recommend.
The unnamed symbols to recommend. For example, operators, conversions and indexers.
Returns a that a user can use to communicate with a remote host (i.e. ServiceHub)
Gets the current RemoteHost client.
Allows a caller to wait until the remote host client is first created, without itself kicking off the work to
spawn the remote host and create the client.
Token signaled when the host starts to shut down.
Keeps alive this solution in the OOP process until the cancellation token is triggered. Used so that long
running features (like 'inline rename' or 'lightbulbs') can call into OOP several times, with the same
snapshot, knowing that things will stay hydrated and alive on the OOP side. Importantly, by keeping the
same snapshot alive on the OOP side, computed attached values (like s) will stay alive as well.
Creates a session between the host and OOP, effectively pinning this until is called on it. By pinning the solution we ensure that all calls to OOP for
the same solution during the life of this session do not need to resync the solution. Nor do they then need
to rebuild any compilations they've already built due to the solution going away and then coming back.
The is not strictly necessary for this type. This class functions just as an
optimization to hold onto data so it isn't resync'ed or recomputed. However, we still want to let the
system know when unobserved async work is kicked off in case we have any tooling that keeps track of this for
any reason (for example for tracking down problems in testing scenarios).
This synchronous entrypoint should be used only in contexts where using the async is not possible (for example, in a constructor).
Creates a session between the host and OOP, effectively pinning this until is called on it. By pinning the solution we ensure that all calls to OOP for
the same solution during the life of this session do not need to resync the solution. Nor do they then need
to rebuild any compilations they've already built due to the solution going away and then coming back.
This represents the client in the client/server model.
Users can create a connection to communicate with the server (remote host) through this client.
Equivalent to
except that only the project (and its dependent projects) will be sync'ed to the remote host before executing.
This is useful for operations that don't ever do any work outside of that project-cone and do not want to pay
the high potential cost of a full sync.
Equivalent to
except that only the project (and its dependent projects) will be sync'ed to the remote host before executing.
This is useful for operations that don't ever do any work outside of that project-cone and do not want to pay
the high potential cost of a full sync.
Equivalent to
except that only the project (and its dependent projects) will be sync'ed to the remote host before executing.
This is useful for operations that don't ever do any work outside of that project-cone and do not want to pay
the high potential cost of a full sync.
Equivalent to
except that only the project (and its dependent projects) will be sync'ed to the remote host before executing.
This is useful for operations that don't ever do any work outside of that project-cone and do not want to pay
the high potential cost of a full sync.
Client-side object that is called back from the server when options for a certain language are required.
Can be used when the remote API does not have an existing callback. If it does it can implement
itself.
Client-side object that is called back from the server when options for a certain language are required.
Can be used when the remote API does not have an existing callback. If it does it can implement
itself.
Abstracts a connection to a service implementing type .
Remote interface type of the service.
Given two solution snapshots ( and ), determines
the set of document text changes necessary to convert to .
Applies the result of to to produce
a solution textually equivalent to the newSolution passed to .
Enables logging of using loggers of the specified .
Initializes telemetry session.
Sets for the process.
Called as soon as the remote process is created but can't guarantee that solution entities (projects, documents, syntax trees) have not been created beforehand.
Process ID of the remote process.
This deals with serializing/deserializing language-specific data.
Interface for services that support dumping their contents to memory-mapped files (generally speaking, our assembly
reference objects). This allows those objects to expose the memory-mapped-file info needed to read that data back
in from any process.
Represents a which can be serialized for sending to another process. The text is not
required to be a live object in the current process, and can instead be held in temporary storage accessible by
both processes.
The storage location for .
Exactly one of or will be non-null.
The in the current process.
Weak reference to a SourceText computed from . Useful so that if multiple requests
come in for the source text, the same one can be returned as long as something is holding it alive.
Checksum of the contents (see ) of the text.
Returns the strongly referenced SourceText if we have it, or tries to retrieve it from the weak reference if
it's still being held there.
A that wraps a and provides access to the text in
a deferred fashion. In practice, during a host and OOP sync, while all the documents will be 'serialized' over
to OOP, the actual contents of the documents will only need to be loaded depending on which files are open, and
thus what compilations and trees are needed. As such, we want to be able to lazily defer actually getting the
contents of the text until it's actually needed. This loader allows us to do that, allowing the OOP side to
simply point to the segments in the memory-mapped-file the host has dumped its text into, and only actually
realizing the real text values when they're needed.
Documents should always hold onto instances of this text loader strongly. In other words, they should load
from this, but should not dump the contents into a RecoverableText object that then dumps the contents to a
memory mapped file within this process. Doing that is pointless, as the contents of this text are already in a
memory mapped file on the host side.
Serializes and deserializes objects to a stream.
Some of these could be moved into the actual objects, but putting everything here makes them a bit easier to find.
Allow analyzer tests to exercise the oop codepaths, even though they're referring to in-memory instances of
DiagnosticAnalyzers. In that case, we'll just share the in-memory instance of the analyzer across the OOP
boundary (which still runs in proc in tests), but we will still exercise all codepaths that use the RemoteClient
as well as exercising all codepaths that send data across the OOP boundary. Effectively, this allows us to
pretend that a is a during tests.
Required information passed with an asset synchronization request to tell the host where to scope the request to. In
particular, this is often used to scope to a particular or to avoid
having to search the entire solution.
Special instance, allowed only in tests/debug-asserts, that can do a full lookup across the entire checksum
tree. Should not be used in normal release-mode product code.
If not null, the search should only descend into the single project with this id.
If not null, the search should only descend into the single document with this id.
Searches only for information about this project.
Searches only for information about this document.
Searches the requested project, and all documents underneath it. Used only in tests.
Searches all documents within the specified project.
Search solution-compilation-state level information.
Search solution-state level information.
Search projects for results. All project-level information will be searched.
Search documents for results.
A wrapper around an array of s, which also combines the value into a
single aggregate checksum exposed through .
A wrapper around an array of s, which also combines the value into a
single aggregate checksum exposed through .
Aggregate checksum produced from all the constituent checksums in .
Enumerates the child checksums (found in ) that make up this collection. This is
equivalent to directly enumerating the property. Importantly, is
not part of this enumeration. is the checksum produced by all those child
checksums.
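The aggregation can be pictured as hashing the child checksums together; the concrete hash function here is an assumption for illustration:

```python
import hashlib

def aggregate_checksum(child_checksums):
    # Produce one aggregate checksum from the constituent child checksums.
    # The aggregate is derived from, but is not itself one of, the children.
    h = hashlib.sha256()
    for child in child_checksums:
        h.update(child)
    return h.digest()
```

Because the children are hashed in sequence, the same children in a different order produce a different aggregate.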
A paired list of s, and the checksums for their corresponding 's .
A paired list of s, and the checksums for their corresponding 's and checksums.
Checksums of the SourceTexts of the frozen documents directly. Not checksums of their DocumentStates.
The particular if this was a checksum tree made for a particular
project cone.
The particular if this was a checksum tree made for a particular
project cone.
Holds onto an object's checksum when that object currently doesn't have a place to hold onto the checksum itself.
The core data structure of the tracker. This is a dictionary of variable name to the
current identifier tokens that are declaring variables. This should only ever be updated
via the AddIdentifier and RemoveIdentifier helpers.
Performs the renaming of the symbol in the solution, identifies renaming conflicts and automatically
resolves them where possible.
The new name of the identifier
Used after renaming references. References that now bind to any of these
symbols are not considered to be in conflict. Useful for features that want to rename existing references to
point at some existing symbol. Normally this would be a conflict, but this can be used to override that
behavior.
The cancellation token.
A conflict resolution containing the new solution.
Finds any conflicts that would arise from using as the new name for a
symbol and returns how to resolve those conflicts. Will not cross any process boundaries to do this.
Used to find the symbols associated with the Invocation Expression surrounding the Token
Computes and adds conflicts relating to declarations, which are independent of
location-based checks. Examples of these types of conflicts include renaming a member to
the same name as another member of a type: binding doesn't change (at least from the
perspective of find all references), but we still need to track it.
Gives the first location for a given symbol by ordering the locations using DocumentId first and Location starting position second.
Helper class to track the state necessary for finding/resolving conflicts in a
rename session.
Find conflicts in the new solution
Gets the list of the nodes that were annotated for a conflict check
The method determines the set of documents that need to be processed for Rename and also determines
the possible set of names that need to be checked for conflicts.
The list will contain strings like Bar -> BarAttribute; Property Bar -> Bar, get_Bar, set_Bar
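The name expansion can be sketched as follows; the kind strings and the exact list of generated names are illustrative assumptions mirroring the Bar -> BarAttribute and get_Bar/set_Bar examples above:

```python
def possible_conflict_names(symbol_kind, new_name):
    names = [new_name]
    if symbol_kind == "NamedType":
        # Renaming a type to "Bar" must also check "BarAttribute".
        names.append(new_name + "Attribute")
    elif symbol_kind == "Property":
        # Renaming a property "Bar" must also check its
        # compiler-generated accessors.
        names += ["get_" + new_name, "set_" + new_name]
    return names
```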
We try to rewrite all locations that are invalid candidate locations. If there is only
one location it must be the correct one (the symbol is ambiguous to something else)
and we always try to rewrite it. If there are multiple locations, we only allow it
if the candidate reason allows for it.
We try to compute the sub-spans to rename within the given .
If we are renaming within a string, the locations to rename are always within this containing string location
and we can identify these sub-spans.
However, if we are renaming within a comment, the rename locations can be anywhere in trivia,
so we return null and the rename rewriter will perform a complete regex match within comment trivia
and rename all matches instead of specific matches.
The result of the conflict engine. Can be made immutable by calling .
The result of the conflict engine. Can be made immutable by calling .
The base workspace snapshot
Whether the text that was resolved with was even valid. This may be false if the
identifier was not valid in some language that was involved in the rename.
The original text that is the rename replacement.
The solution snapshot as it is being updated with specific rename steps.
Gives information about an identifier span that was affected by Rename (reference or non-reference)
The Span of the original identifier if it was in source, otherwise the span to check for implicit
references.
If there was a conflict at ConflictCheckSpan during rename, then the next phase in rename uses
ComplexifiedTargetSpan span to be expanded to resolve the conflict.
Gives information about an identifier span that was affected by Rename (reference or non-reference)
The Span of the original identifier if it was in source, otherwise the span to check for implicit
references.
If there was a conflict at ConflictCheckSpan during rename, then the next phase in rename uses
ComplexifiedTargetSpan span to be expanded to resolve the conflict.
The Span of the original identifier if it was in source, otherwise the span to check for implicit
references.
If there was a conflict at ConflictCheckSpan during rename, then the next phase in rename uses
ComplexifiedTargetSpan span to be expanded to resolve the conflict.
There was no conflict.
A conflict was resolved at a location that references the symbol being renamed.
A conflict was resolved in a piece of code that does not reference the symbol being
renamed.
There was a conflict that could not be resolved.
These are the conflicts that cannot be resolved. E.g.: Declaration Conflict
Tracks the text spans that were modified as part of a rename operation
Information to track deltas of complexified spans
Consider the following example where renaming a->b causes a conflict
and Goo is an extension method:
"a.Goo(a)" is rewritten to "NS1.NS2.Goo(NS3.a, NS3.a)"
The OriginalSpan is the span of "a.Goo(a)"
The NewSpan is the span of "NS1.NS2.Goo(NS3.a, NS3.a)"
The ModifiedSubSpans are the pairs of complexified symbols sorted
according to their order in the original source code span:
"a", "NS3.a"
"Goo", "NS1.NS2.Goo"
"a", "NS3.a"
Information to track deltas of complexified spans
Consider the following example where renaming a->b causes a conflict
and Goo is an extension method:
"a.Goo(a)" is rewritten to "NS1.NS2.Goo(NS3.a, NS3.a)"
The OriginalSpan is the span of "a.Goo(a)"
The NewSpan is the span of "NS1.NS2.Goo(NS3.a, NS3.a)"
The ModifiedSubSpans are the pairs of complexified symbols sorted
according to their order in the original source code span:
"a", "NS3.a"
"Goo", "NS1.NS2.Goo"
"a", "NS3.a"
This annotation will be used by rename to mark all places where it needs to rename an identifier (token replacement) and where to
check if the semantics have been changed (conflict detection).
This annotation should be put on tokens only.
This annotation will be used by rename to mark all places where it needs to rename an identifier (token replacement) and where to
check if the semantics have been changed (conflict detection).
This annotation should be put on tokens only.
The span this token occupied in the original syntax tree. Can be used to show e.g. conflicts in the UI.
A flag indicating whether this is a location that needs to be renamed or just tracked for conflicts.
A flag indicating whether the token at this location has the same ValueText as the original name
of the symbol that gets renamed.
When replacing the annotated token this string will be prepended to the token's value. This is used when renaming compiler
generated fields and methods backing properties (e.g. "get_X" or "_X" for property "X").
When replacing the annotated token this string will be appended to the token's value. This is used when renaming compiler
generated types whose names are derived from user given names (e.g. "XEventHandler" for event "X").
A single dimensional array of annotations to verify after rename.
States if this token is a Namespace Declaration Reference
States if this token is a member group reference, typically found in NameOf expressions
States if this token is annotated as a part of the Invocation Expression that needs to be checked for the Conflicts
This class is used to refer to a Symbol definition which could be in source or metadata
it has a metadata name.
The metadata name for this symbol.
Count of symbol location (Partial Types, Constructors, etc).
A flag indicating that the associated symbol is an override of a symbol from metadata
A flag indicating whether the rename operation was successful.
If this is false, the would be with this resolution. All the other fields or properties would be or empty.
If this is true, the would be null. All the other fields or properties would be valid.
The final solution snapshot. Including any renamed documents.
The list of all document ids of documents that have been touched for this rename operation.
Runs the entire rename operation out-of-process (OOP) and returns the final result. More efficient (due to less back and
forth marshaling) when the intermediary results of rename are not needed. To get the individual parts of
rename remoted use and .
Equivalent to except that references to symbols are kept in a lightweight fashion
to avoid expensive rehydration steps as a host and OOP communicate.
Find the locations that need to be renamed. Can cross process boundaries efficiently to do this.
Holds the Locations of a symbol that should be renamed, along with the symbol and Solution for the set. It is
considered 'heavy weight' because it holds onto large entities (like Symbols) and thus should not be marshaled
to/from a host to OOP.
A helper class that contains some of the methods and filters that must be used when
processing the raw results from the FindReferences API.
Attempts to find all the locations to rename. Will not cross any process boundaries to do this.
Given an ISymbol, returns the renameable locations for that symbol.
This method annotates the given syntax tree with all the locations that need to be checked for conflict
after the rename operation. It also renames all the reference locations and expands any conflict locations.
The options describing this rename operation
The root of the annotated tree.
Based on the kind of the symbol and the new name, this function determines possible conflicting names that
should be tracked for semantic changes during rename.
The symbol that gets renamed.
The new name for the symbol.
List where possible conflicting names will be added to.
Identifies the conflicts caused by the new declaration created during rename.
The replacementText as given from the user.
The new symbol (after rename).
The original symbol that got renamed.
All referenced symbols that are part of this rename session.
The original solution when rename started.
The resulting solution after rename.
A mapping from new to old locations.
The cancellation token.
All locations where conflicts were caused because of the new declaration.
Identifies the conflicts caused by implicitly referencing the renamed symbol.
The original symbol that got renamed.
The new symbol (after rename).
All implicit reference locations.
The cancellation token.
A list of implicit conflicts.
Identifies the conflicts caused by implicitly referencing the renamed symbol.
The new symbol (after rename).
The SemanticModel of the document in the new solution containing the renamedSymbol
The location of the renamedSymbol in the old solution
The starting position of the renamedSymbol in the new solution
The cancellation token.
A list of implicit conflicts.
Identifies potential conflicts with the inner scope locals. This may give false positives.
The token that may introduce errors elsewhere
The symbols that this token binds to after the rename
has been applied
Returns if there is a potential conflict
Used to find if the replacement Identifier is valid
Gets the top most enclosing statement as target to call MakeExplicit on.
It's either the enclosing statement, or if this statement is inside of a lambda expression, the enclosing
statement of this lambda.
The token to get the complexification target for.
mentions that the result is for the base symbol of the rename
mentions that the result is for the overloaded symbols of the rename
Options for renaming a symbol.
If the symbol is a method, renames its overloads as well.
Rename identifiers in string literals that match the name of the symbol.
Rename identifiers in comments that match the name of the symbol.
If the symbol is a type, renames the file containing the type declaration as well.
Options for renaming a symbol.
If the symbol is a method, renames its overloads as well.
Rename identifiers in string literals that match the name of the symbol.
Rename identifiers in comments that match the name of the symbol.
If the symbol is a type, renames the file containing the type declaration as well.
If the symbol is a method, renames its overloads as well.
Rename identifiers in string literals that match the name of the symbol.
Rename identifiers in comments that match the name of the symbol.
If the symbol is a type, renames the file containing the type declaration as well.
Options for renaming a document.
If the document contains a type declaration with a matching name, renames identifiers in strings that match the name as well.
If the document contains a type declaration with a matching name, renames identifiers in comments that match the name as well.
Options for renaming a document.
If the document contains a type declaration with a matching name, renames identifiers in strings that match the name as well.
If the document contains a type declaration with a matching name, renames identifiers in comments that match the name as well.
If the document contains a type declaration with a matching name, renames identifiers in strings that match the name as well.
If the document contains a type declaration with a matching name, renames identifiers in comments that match the name as well.
Call to perform a rename of document or change in document folders. Returns additional code changes related to the document
being modified, such as renaming symbols in the file.
Each change is added as a in the returned .
Each action may individually encounter errors that prevent it from behaving correctly. Those are reported in .
Current supported actions that may be returned:
- Rename symbol action that will rename the type to match the document name.
- Sync namespace action that will sync the namespace(s) of the document to match the document folders.
The document to be modified
The new name for the document. Pass null or the same name to keep unchanged.
Options used to configure rename of a type contained in the document that matches the document's name.
The new set of folders for the property
Individual action from RenameDocument APIs in . Represents
changes that will be done to one or more document contents to help facilitate
a smooth experience while moving documents around.
See on use case and how to apply them to a solution.
Get any errors that have been noted for this action before it is applied.
Can be used to present to a user.
Gets the description of the action. Can be used to present to a user to describe
what extra actions will be taken.
Information about rename document calls that allows them to be applied as individual actions. Actions are individual units of work
that can change the contents of one or more document in the solution. Even if the is empty, the
document metadata will still be updated by calling
To apply all actions use , or use a subset
of the actions by calling .
Actions can be applied in any order.
Each action has a description of the changes that it will apply that can be presented to a user.
All applicable actions computed for the action. Action set may be empty, which represents updates to document
contents rather than metadata. Document metadata will still not be updated unless
is called.
Same as calling with
as the argument
Applies each in order and returns the final solution.
All actions must be contained in
An empty action set is still allowed and will return a modified solution
that will update the document properties as appropriate. This means we
can still support when is empty. It's desirable
that consumers can call a rename API to produce a and
immediately call without
having to inspect the returned .
Attempts to find the document in the solution. Tries by documentId first, but
that's not always reliable between analysis and application of the rename actions
Action that will rename a type to match the current document name. Works by finding a type matching the original name of the document (case-insensitive)
and updating that type.
Finds a matching type such that the display name of the type matches the name passed in, ignoring case. Case isn't considered because
documents named "Foo.cs" and "foo.cs" should still map to the same type name
Name of the document that the action was produced for.
The new document name that will be used.
The original name of the symbol that will be changed.
The new name for the symbol.
Action that will sync the namespace of the document to match the folders property
of that document, similar to if a user performed the "Sync Namespace" code refactoring.
For example, if a document is moved from "Bat/Bar/Baz" folder structure to "Bat/Bar/Baz/Bat" and contains
a namespace definition of Bat.Bar.Baz in the document, then it would update that definition to
Bat.Bar.Baz.Bat and update the solution to reflect these changes. Uses
Renaming a private symbol typically confines the set of references and potential
conflicts to that symbol's declaring project. However, rename may cascade to
non-public symbols which may then require other projects be considered.
Given a symbol in a document, returns the "right" symbol that should be renamed in
the case the name binds to things like aliases _and_ the underlying type at once.
Given a symbol, finds the symbol that actually defines the name that we're using.
Interface only for use by . Includes language specific
implementations on how to get an appropriate speculated semantic model given an older semantic model and a
changed method body.
Given a node, returns the parent method-body-esque node that we can get a new speculative semantic model
for. Returns if not in such a location.
Given a previous semantic model, and a method-esque node in the current tree for that same document, attempts
to create a new speculative semantic model using the top level symbols of but the new body level symbols produced for .
Note: it is critical that no top level changes have occurred between the syntax tree that points at and the syntax tree that points
at. In other words, they must be (..., topLevel: true). This
function is undefined if they are not.
The original non-speculative semantic model we retrieved for this document at some point.
The current semantic model we retrieved for the . Could
be speculative or non-speculative.
The current method body we retrieved the for.
The top level version of the project when we retrieved . As long as this is the
same we can continue getting speculative models to use.
A mapping from a document id to the last semantic model we produced for it, along with the top level
semantic version that that semantic model corresponds to. We can continue reusing the semantic model as
long as no top level changes occur.
In general this dictionary will only contain a single key-value pair. However, in the case of linked
documents, there will be a key-value pair for each of the independent document links that a document
has.
A value simply means we haven't cached any information for that particular id.
a service that provides a semantic model that will re-use last known compilation if
semantic version hasn't changed.
Don't call this directly. use (or an overload).
Attempts to return a speculative semantic model for if possible if is contained within a method body in the tree. Specifically, this will attempt to get an
existing cached semantic model for . If it can find one, and the top-level semantic
version for this project matches the cached version, and the position is within a method body, then it will
be returned, just with the previous corresponding method body swapped out with the current method body.
If this is not possible, the regular semantic model for will be returned.
When using this API, semantic model should only be used to ask questions about nodes inside of the member
that contains the given .
As a speculative semantic model may be returned, location-based information provided by it may be inaccurate.
Attempts to return a speculative semantic model for if possible if is contained within a method body in the tree. Specifically, this will attempt to get an
existing cached semantic model . If it can find one, and the top-level semantic
version for this project matches the cached version, and the position is within a method body, then it will
be returned, just with the previous corresponding method body swapped out with the current method body.
If this is not possible, the regular semantic model for will be returned.
When using this API, semantic model should only be used to ask questions about nodes inside of the
member that contains the given .
As a speculative semantic model may be returned, location-based information provided by it may be inaccurate.
Attempts to return a speculative semantic model for if possible if is contained within a method body in the tree. Specifically, this will attempt to get an
existing cached semantic model . If it can find one, and the top-level semantic
version for this project matches the cached version, and the position is within a method body, then it will
be returned, just with the previous corresponding method body swapped out with the current method body.
If this is not possible, the regular semantic model for will be returned.
When using this API, semantic model should only be used to ask questions about nodes inside of the
member that contains the given .
As a speculative semantic model may be returned, location-based information provided by it may be inaccurate.
Returns a new based off of the positions in , but
which is guaranteed to fall entirely within the span of .
Returns a new based off of the positions in , but
which is guaranteed to fall entirely within the span of .
Returns the methodSymbol and any partial parts.
Returns true for void returning methods with two parameters, where
the first parameter is of type and the second
parameter inherits from or equals type.
Tells whether an async method returns a task-like type, awaiting which produces a result
Gets the set of members in the inheritance chain of that
are overridable. The members will be returned in furthest-base type to closest-base
type order. i.e. the overridable members of will be at the start
of the list, and the members of the direct parent type of
will be at the end of the list.
If a member has already been overridden (in or any base type)
it will not be included in the list.
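The furthest-base-first ordering and the already-overridden filtering described above can be sketched in Python with a deliberately simplified type model (the names and data shapes here are hypothetical, not Roslyn's):

```python
def get_overridable_members(base_chain, already_overridden):
    """Collects overridable members from furthest base to closest base.

    base_chain: list of (type_name, [overridable member names]) pairs,
    ordered from the furthest base type to the direct parent type.
    already_overridden: member names already overridden in the type
    itself or in any base type; these are excluded from the result.
    """
    result, seen = [], set()
    for _type_name, members in base_chain:
        for member in members:
            # Skip members already overridden, and members a closer
            # base re-declares that we already collected.
            if member in already_overridden or member in seen:
                continue
            seen.add(member)
            result.append(member)
    return result
```

With a chain of `object` then a direct parent, the `object` members come first in the result, matching the documented ordering.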
Searches the namespace for namespaces with the provided name.
Checks a given symbol for browsability based on its declaration location, attributes explicitly limiting
browsability, and whether showing of advanced members is enabled. The optional editorBrowsableInfo parameters
may be used to specify the symbols of the constructors of the various browsability limiting attributes because
finding these repeatedly over a large list of symbols can be slow. If these are not provided, they will be found
in the compilation.
First, remove symbols from the set if they are overridden by other symbols in the set.
If a symbol is overridden only by symbols outside of the set, then it is not removed.
This is useful for filtering out symbols that cannot be accessed in a given context due
to the existence of overriding members. Second, remove remaining symbols that are
unsupported (e.g. pointer types in VB) or not editor browsable based on the EditorBrowsable
attribute.
Returns if the signature of this symbol requires the modifier. For example a method that takes List<int*[]>
is unsafe, as is int* Goo { get; }. This will return for
symbols that cannot have the modifier on them.
Returns true if symbol is a local variable and its declaring syntax node is
after the current position, false otherwise (including for non-local symbols)
If the is a method symbol, returns if the method's return type is "awaitable", but not if it's .
If the is a type symbol, returns if that type is "awaitable".
An "awaitable" is any type that exposes a GetAwaiter method which returns a valid "awaiter". This GetAwaiter method may be an instance method or an extension method.
Returns true for symbols whose name starts with an underscore and
are optionally followed by an integer or other underscores, such as '_', '_1', '_2', '__', '___', etc.
These are treated as special discard symbol names.
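The discard-name rule described above (an underscore optionally followed by an integer or by more underscores) can be expressed as a small regular expression; this is a sketch, not the exact predicate used internally:

```python
import re

# '_', '_1', '_2', '__', '___', ... are treated as special discard names:
# a single underscore followed by either an integer or a run of underscores.
_DISCARD_NAME = re.compile(r"_(?:\d+|_*)")

def is_special_discard_name(name):
    return _DISCARD_NAME.fullmatch(name) is not None
```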
Returns , if the symbol is marked with the .
if the symbol is marked with the .
Visits types or members that have signatures (i.e. methods, fields, etc.) and determines
if any of them reference a pointer type and should thus have the modifier on them.
Checks if 'symbol' is accessible from within 'within'.
Checks if 'symbol' is accessible from within assembly 'within'.
Checks if 'symbol' is accessible from within name type 'within', with an optional
qualifier of type "throughTypeOpt".
Checks if 'symbol' is accessible from within assembly 'within', with a qualifier of
type "throughTypeOpt". Sets "failedThroughTypeCheck" to true if it failed the "through
type" check.
Checks if 'symbol' is accessible from within 'within', which must be a INamedTypeSymbol
or an IAssemblySymbol. If 'symbol' is accessed off of an expression then
'throughTypeOpt' is the type of that expression. This is needed to properly do protected
access checks. Sets "failedThroughTypeCheck" to true if this protected check failed.
Returns the corresponding symbol in this type or a base type that implements
interfaceMember (either implicitly or explicitly), or null if no such symbol exists
(which might be either because this type doesn't implement the container of
interfaceMember, or this type doesn't supply a member that successfully implements
interfaceMember).
Like Span, except it has a start/end line instead of a start/end position.
Inclusive
Exclusive
Like Span, except it has a start/end line instead of a start/end position.
Inclusive
Exclusive
Inclusive
Exclusive
Gets extended host language services, which includes language services from .
Acquires a lease on a safe handle. The lease increments the reference count of the
to ensure the handle is not released prior to the lease being released.
This method is intended to be used in the initializer of a using statement. Failing to release the
lease will permanently prevent the underlying from being released by the garbage
collector.
The to lease.
A , which must be disposed to release the resource.
If the lease could not be acquired.
Represents a lease of a .
Releases the lease. The behavior of this method is unspecified if called more than
once.
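The lease pattern described above (incrementing a reference count on acquisition so the handle cannot be released while the lease is outstanding) can be sketched with a toy Python stand-in; the class and function names here are hypothetical:

```python
import threading
from contextlib import contextmanager

class RefCountedHandle:
    """Toy stand-in for a reference-counted safe handle."""
    def __init__(self):
        self._refs = 1
        self.released = False
        self._lock = threading.Lock()

    def add_ref(self):
        with self._lock:
            if self.released:
                raise RuntimeError("cannot lease a released handle")
            self._refs += 1

    def release(self):
        with self._lock:
            self._refs -= 1
            if self._refs == 0:
                self.released = True  # stand-in for freeing the resource

@contextmanager
def lease(handle):
    # Intended for the initializer of a with-statement; failing to exit
    # the block would keep the reference count elevated forever.
    handle.add_ref()
    try:
        yield handle
    finally:
        handle.release()
```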
Gets semantic information, such as type, symbols, and diagnostics, about the parent of a token.
The SemanticModel object to get semantic information
from.
The token to get semantic information from. This must be part of the
syntax tree associated with the binding.
A cancellation token.
Fetches the ITypeSymbol that should be used if we were generating a parameter or local that would accept . If
expression is a type, that's returned; otherwise this will see if it's something like a method group and then choose an appropriate delegate.
Note: there is a strong invariant that you only get arrays back from this that are exactly long. Putting arrays back into this of the wrong length will result in broken
behavior. Do not expose this pool outside of this class.
Public so that the caller can assert that the new SourceText read all the way to the end of this successfully.
Returns the leading whitespace of the line located at the specified position in the given snapshot.
Same as OverlapsHiddenPosition but doesn't throw on cancellation. Instead, returns false
in that case.
Generates a call to a method *through* an existing field or property symbol.
Generates an override of similar to the one
generated for anonymous types.
In VB it's more idiomatic to write things like Dim t = TryCast(obj, SomeType)
instead of Dim t As SomeType = TryCast(obj, SomeType), so we just elide the type
from the decl. For C# we don't want to do this though. We want to always include the
type and let the simplifier decide if it should be var or not.
Returns true if the binaryExpression consists of an expression that can never be negative,
such as length or unsigned numeric types, being compared to zero with greater than,
less than, or equals relational operator.
Gets a type by its metadata name to use for code analysis within a . This method
attempts to find the "best" symbol to use for code analysis, which is the symbol matching the first of the
following rules.
-
If only one type with the given name is found within the compilation and its referenced assemblies, that
type is returned regardless of accessibility.
-
If the current defines the symbol, that symbol is returned.
-
If exactly one referenced assembly defines the symbol in a manner that makes it visible to the current
, that symbol is returned.
-
Otherwise, this method returns .
The to consider for analysis.
The fully-qualified metadata type name to find.
The symbol to use for code analysis; otherwise, .
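The precedence rules above can be sketched as a small decision function; this is a language-agnostic illustration with hypothetical inputs (assembly names standing in for symbols), not the actual implementation:

```python
def get_best_type_by_metadata_name(matches, current_assembly):
    """matches: list of (defining_assembly, visible_to_current) pairs,
    one per type found with the requested metadata name."""
    # Rule 1: a unique match wins regardless of accessibility.
    if len(matches) == 1:
        return matches[0][0]
    # Rule 2: the current assembly's own definition wins.
    for assembly, _visible in matches:
        if assembly == current_assembly:
            return assembly
    # Rule 3: exactly one referenced assembly defines it visibly.
    visible = [assembly for assembly, vis in matches if vis]
    if len(visible) == 1:
        return visible[0]
    # Rule 4: otherwise, no usable symbol.
    return None
```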
Gets the implicit method that wraps top-level statements.
Gets the project-level effective severity of the given accounting for severity configurations from both of the following sources:
1. Compilation options from ruleset file, if any, and command line options such as /nowarn, /warnaserror, etc.
2. Analyzer config documents at the project root directory or in ancestor directories.
Gets the document-level effective severity of the given accounting for severity configurations from both of the following sources:
1. Compilation options from ruleset file, if any, and command line options such as /nowarn, /warnaserror, etc.
2. Analyzer config documents at the document root directory or in ancestor directories.
Gets the effective diagnostic severity for the diagnostic ID corresponding to the
given by looking up the severity settings in the options.
If the provided options are specific to a particular tree, provide a non-null value
for to look up tree specific severity options.
Tries to get configured severity for the given
from bulk configuration analyzer config options, i.e.
'dotnet_analyzer_diagnostic.category-%RuleCategory%.severity = %severity%'
or
'dotnet_analyzer_diagnostic.severity = %severity%'
See https://docs.microsoft.com/visualstudio/code-quality/use-roslyn-analyzers?view=vs-2019#set-rule-severity-of-multiple-analyzer-rules-at-once-in-an-editorconfig-file for details
Update a list in place, where a function has the ability to either transform or remove each item.
The type of items in the list.
The type of state argument passed to the transformation callback.
The list to update.
A function which transforms each element. The function returns the transformed list
element, or to remove the current item from the list.
The state argument to pass to the transformation callback.
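The transform-or-remove update described above can be sketched as an in-place compaction; the function name and signature here are illustrative, not the actual API:

```python
def update_list_in_place(items, transform, state):
    """transform(item, state) returns the replacement element, or None
    to remove the current item; the list is compacted in place."""
    write = 0
    for read in range(len(items)):
        new_item = transform(items[read], state)
        if new_item is not None:
            items[write] = new_item
            write += 1
    del items[write:]  # drop the now-unused tail
```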
Attempts to remove the first item selected by .
True if any item has been removed.
Takes an array of s and produces a single resultant with all their values merged together. Absolutely no ordering guarantee is
provided. It will be expected that the individual values from distinct enumerables will be interleaved
together.
This helper is useful when doing parallel processing of work where each job returns an , but one final stream is desired as the result.
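A minimal asyncio analog of this merge (pumping every source into one queue so values interleave with no ordering guarantee) might look like the following sketch; the names are hypothetical:

```python
import asyncio

async def merge(*sources):
    """Interleaves values from several async iterators into one stream;
    no ordering guarantee is provided across sources."""
    queue: asyncio.Queue = asyncio.Queue()
    done = object()  # sentinel marking one exhausted source

    async def pump(source):
        async for item in source:
            await queue.put(item)
        await queue.put(done)

    tasks = [asyncio.ensure_future(pump(s)) for s in sources]
    remaining = len(tasks)
    while remaining:
        item = await queue.get()
        if item is done:
            remaining -= 1
        else:
            yield item
```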
Runs after task completes in any fashion (success, cancellation, faulting) and ensures the channel writer is
always completed. If the task faults then the exception from that task will be used to complete the channel
Returns true if a given token is a child token of a certain type of parent node.
The type of the parent node.
The node that we are testing.
A function that, when given the parent node, returns the child token we are interested in.
Returns true if this node is found underneath the specified child in the given parent.
Creates a new tree of nodes from the existing tree with the specified old nodes replaced with newly computed nodes.
The root of the tree that contains all the specified nodes.
The nodes from the tree to be replaced.
A function that computes a replacement node for
the argument nodes. The first argument is one of the original specified nodes. The second argument is
the same node possibly rewritten with replaced descendants.
Creates a new tree of tokens from the existing tree with the specified old tokens replaced with newly computed tokens.
The root of the tree that contains all the specified tokens.
The tokens from the tree to be replaced.
A function that computes a replacement token for
the argument tokens. The first argument is one of the originally specified tokens. The second argument is
the same token possibly rewritten with replaced trivia.
Look inside a trivia list for a skipped token that contains the given position.
Look inside a trivia list for a skipped token that contains the given position.
Look inside a trivia list for a skipped token that contains the given position.
Look inside a trivia list for a skipped token that contains the given position.
If the position is inside of token, return that token; otherwise, return the token to the right.
If the position is inside of token, return that token; otherwise, return the token to the left.
Creates a new token with the leading trivia removed.
Creates a new token with the trailing trivia removed.
Finds the node within the given corresponding to the given .
If the is , then returns the given node.
Gets a list of ancestor nodes (including this node)
Returns the identifier, keyword, contextual keyword or preprocessor keyword touching this
position, or a token of Kind = None if the caret is not touching either.
If the position is inside of token, return that token; otherwise, return the token to the right.
If the position is inside of token, return that token; otherwise, return the token to the left.
Finds the node in the given corresponding to the given .
If the is , then returns the root node of the tree.
Returns the first non-whitespace position on the given line, or null if
the line is empty or contains only whitespace.
Returns the first non-whitespace position on the given line as an offset
from the start of the line, or null if the line is empty or contains only
whitespace.
Determines whether the specified line is empty or contains whitespace only.
Merges the provided spans into distinct groups of spans in ascending order
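The merge can be sketched as a standard interval merge over sorted (start, end) pairs; this is an illustrative Python version, not the actual implementation:

```python
def merge_spans(spans):
    """Merges overlapping (start, end) spans into distinct groups,
    returned in ascending order."""
    merged = []
    for start, end in sorted(spans):
        if merged and start <= merged[-1][1]:
            # Overlaps the last group: extend it.
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged
```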
Returns true if the span encompasses the specified node or token and is contained within its trivia.
Returns true if the span encompasses a span between the specified nodes or tokens
and is contained within trivia around them.
Lazily returns all nested types contained (recursively) within this namespace or type.
In the case of a type, the type itself is included as the first result.
Standard format for displaying to the user.
No return type.
Contains enough information to determine whether two symbols have the same signature.
The token to the left of . This token may be touching the position.
The first token to the left of that we're not touching. Equal to
if we aren't touching .
Is in the base list of a type declaration. Note, this only counts when at the top level of the base list, not
*within* any type already in the base list. For example class C : $$ is in the base list. But class
C : A<$$> is not.
Performs several edits to a document. If multiple edits are made within the same
expression context, then the document/semantic-model will be forked after each edit
so that further edits can see if they're still safe to apply.
Performs several edits to a document. If multiple edits are made within the same
expression context, then the document/semantic-model will be forked after each edit
so that further edits can see if they're still safe to apply.
Performs several edits to a document. If multiple edits are made within the same
expression context, then the document/semantic-model will be forked after each edit
so that further edits can see if they're still safe to apply.
Performs several edits to a document. If multiple edits are made within a method
body then the document/semantic-model will be forked after each edit so that further
edits can see if they're still safe to apply.
Performs several edits to a document. If multiple edits are made within a method
body then the document/semantic-model will be forked after each edit so that further
edits can see if they're still safe to apply.
Helper function for fix-all fixes where individual fixes may affect the viability
of another. For example, consider the following code:
if ((double)x == (double)y)
In this code either cast can be removed, but at least one cast must remain. Even
though an analyzer marks both, a fixer must not remove both. One way to accomplish
this would be to have the fixer do a semantic check after each application. However,
this is extremely expensive, especially for the common case where the fixes do
not affect each other.
To address that, this helper groups fixes at certain boundary points. i.e. at
statement boundaries. If there is only one fix within the boundary, it does not
do any semantic verification. However, if there are multiple fixes in a boundary
it will call into to validate if the subsequent fix
can be made or not.
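The boundary-grouping strategy described above can be sketched as follows; the function names, the shape of the inputs, and the verification callback are all hypothetical stand-ins for the real fixer machinery:

```python
def apply_grouped_fixes(fixes_by_boundary, apply_fix, is_still_safe):
    """fixes_by_boundary: mapping from a boundary (e.g. a statement) to
    the fixes inside it. Groups with a single fix skip semantic
    verification; in multi-fix groups each subsequent fix is
    re-verified against the fixes already applied."""
    applied = []
    for fixes in fixes_by_boundary.values():
        if len(fixes) == 1:
            apply_fix(fixes[0])
            applied.append(fixes[0])
            continue
        for fix in fixes:
            if is_still_safe(fix, applied):
                apply_fix(fix)
                applied.append(fix)
    return applied
```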
Creates a new instance of this text document updated to have the text specified.
Creates a new instance of this additional document updated to have the text specified.
Creates a new instance of this analyzer config document updated to have the text specified.
Stores the source information for an value. Helpful when
tracking down tokens which aren't properly disposed.
Stores the source information for an value. Helpful when
tracking down tokens which aren't properly disposed.
use in product code to get
and use
in test to get waiter.
Indicates whether the asynchronous listener is enabled.
It is tri-state because, if the value was never explicitly set, we want to retrieve it from an environment variable
and then cache it.
We read the value from the environment variable (RoslynWaiterEnabled) because we want teams that don't have
access to Roslyn code (InternalsVisibleTo) to be able to use this listener/waiter framework as well.
Those teams can enable this without using the API
Indicates whether is enabled or not.
It is tri-state because, if the value was never explicitly set, we want to retrieve it from an environment variable
and then cache it.
We read the value from the environment variable (RoslynWaiterDiagnosticTokenEnabled) because we want teams that don't have
access to Roslyn code (InternalsVisibleTo) to be able to use this listener/waiter framework as well.
Those teams can enable this without using the API
Provides a default value for .
Enables or disables TrackActiveTokens for tests
Gets waiters for listeners for tests
Wait for all of the instances to finish their
work.
This is a very handy method for debugging hangs in unit tests. Set a breakpoint in the
loop, dig into the waiters, and see all of the active values
representing the remaining work.
Gets all saved DiagnosticAsyncTokens to make investigating test failures easier
Returns a task which completes when all asynchronous operations currently tracked by this waiter are
completed. Asynchronous operations are expedited when possible, meaning artificial delays placed before
asynchronous operations are shortened.
Creates a task that will complete after a time delay, but can be expedited if an operation is waiting for
the task to complete.
The time to wait before completing the returned task, or TimeSpan.FromMilliseconds(-1) to wait indefinitely.
A cancellation token to observe while waiting for the task to complete.
if the delay completed normally; otherwise, if the delay completed due to a request to expedite the delay.
represents a negative time interval other than TimeSpan.FromMilliseconds(-1).
-or-
The argument's property is greater than .
The delay has been canceled.
Return for the given featureName
We have this abstraction so that we can have isolated listener/waiter in unit tests
Gets for the given feature.
The same provider will return a singleton listener for the same feature
Modification of the murmurhash2 algorithm. Code is simpler because it operates over
strings instead of byte arrays. Because each string character is two bytes, it is known
that the input will be an even number of bytes (though not necessarily a multiple of 4).
This is needed over the normal 'string.GetHashCode()' because we need to be able to generate
'k' different well distributed hashes for any given string s. Also, we want to be able to
generate these hashes without allocating any memory. My ideal solution would be to use an
MD5 hash. However, there appears to be no way to do MD5 in .NET where you can:
a) feed it individual values instead of a byte[]
b) have the hash computed into a byte[] you provide instead of a newly allocated one
Generating 'k' pieces of garbage on each insert and lookup seems very wasteful. So,
instead, we use murmur hash since it provides well distributed values, allows for a
seed, and allocates no memory.
Murmur hash is public domain. Actual code is included below as reference.
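A sketch of the MurmurHash2 variant described above, operating over a string's 2-byte characters (so the input is always an even number of bytes, though not necessarily a multiple of 4) with a seed parameter. This is an illustrative Python port, not the code used here:

```python
def murmur2_hash(text, seed=0):
    """MurmurHash2 over a string's UTF-16 code units, with a seed."""
    m, r, mask = 0x5BD1E995, 24, 0xFFFFFFFF
    h = (seed ^ (len(text) * 2)) & mask  # length in bytes
    i = 0
    while i + 1 < len(text):
        # Two 2-byte characters form one 4-byte block.
        k = ord(text[i]) | (ord(text[i + 1]) << 16)
        k = (k * m) & mask
        k ^= k >> r
        k = (k * m) & mask
        h = (h * m) & mask
        h ^= k
        i += 2
    if i < len(text):
        # A single 2-byte character remains in the tail.
        h ^= ord(text[i])
        h = (h * m) & mask
    # Final avalanche mixing.
    h ^= h >> 13
    h = (h * m) & mask
    h ^= h >> 15
    return h
```

Because the function is seeded, 'k' well-distributed hashes for a string can be produced by calling it with 'k' different seeds, with no allocation beyond the integer results.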
Provides mechanism to efficiently obtain bloom filter hash for a value. Backed by a single element cache.
Although calculating this hash isn't terribly expensive, it does involve multiple
(usually around 13) hashings of the string (the actual count is ).
The typical usage pattern of bloom filters is that some operation (eg: find references)
requires asking a multitude of bloom filters whether a particular value is likely contained.
The vast majority of those bloom filters will end up hashing that string to the same values, so
we put those values into a simple cache and see if it can be used before calculating.
Local testing has put the hit rate of this at around 99%.
Note that it's possible for this method to return an array from the cache longer than hashFunctionCount,
but if so, it's guaranteed that the values returned in the first hashFunctionCount entries are
the same as if the cache hadn't been used.
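The single-element cache can be sketched as below; `compute_hashes` is a stand-in for the k seeded murmur-style hashes (here backed by seeded CRC-32 just to keep the example self-contained):

```python
import zlib

def compute_hashes(value, hash_function_count):
    # Stand-in for k seeded hash functions; any seeded family works
    # for the sketch.
    return tuple(
        zlib.crc32(value.encode("utf-16-le"), seed)
        for seed in range(hash_function_count)
    )

class BloomHashCache:
    """Single-element cache of the hash array for the most recently
    hashed string (hypothetical analog of the cache described above)."""
    def __init__(self, hash_function_count):
        self._k = hash_function_count
        self._key = None
        self._hashes = None

    def get_or_compute(self, value):
        if value == self._key:
            return self._hashes  # cache hit: no recomputation
        self._hashes = compute_hashes(value, self._k)
        self._key = value
        return self._hashes
```

Asking many bloom filters about the same string then reuses one cached hash array instead of rehashing for each filter.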
A documentation comment derived from either source text or metadata.
True if an error occurred when parsing.
The full XML text of this tag.
The text in the <example> tag. Null if no tag existed.
The text in the <summary> tag. Null if no tag existed.
The text in the <returns> tag. Null if no tag existed.
The text in the <value> tag. Null if no tag existed.
The text in the <remarks> tag. Null if no tag existed.
The names of items in <param> tags.
The names of items in <typeparam> tags.
The types of items in <exception> tags.
The item named in the <completionlist> tag's cref attribute.
Null if the tag or cref attribute didn't exist.
Used for method, to prevent a new string allocation
Cache of the most recently parsed fragment and the resulting DocumentationComment
Parses and constructs a from the given fragment of XML.
The fragment of XML to parse.
A DocumentationComment instance.
Helper class for parsing XML doc comments. Encapsulates the state required during parsing.
Parse and construct a from the given fragment of XML.
The fragment of XML to parse.
A DocumentationComment instance.
Returns the text for a given parameter, or null if no documentation was given for the parameter.
Returns the text for a given type parameter, or null if no documentation was given for the type parameter.
Returns the texts for a given exception, or an empty if no documentation was given for the exception.
An empty comment.
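The tag extraction described above can be illustrated with a minimal Python analog; wrapping the fragment in a synthetic root element lets fragments with multiple top-level tags parse as well-formed XML. The function and dictionary shape here are hypothetical:

```python
import xml.etree.ElementTree as ET

def parse_doc_comment(fragment):
    """Parses a doc-comment XML fragment into its common tags."""
    # A synthetic root makes multi-element fragments well-formed.
    root = ET.fromstring("<root>" + fragment + "</root>")

    def text_of(tag):
        element = root.find(tag)
        if element is None or element.text is None:
            return None  # "Null if no tag existed."
        return element.text.strip()

    return {
        "summary": text_of("summary"),
        "returns": text_of("returns"),
        "params": {p.get("name"): (p.text or "").strip()
                   for p in root.findall("param")},
    }
```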
Finds the constructor which takes exactly one argument, which must be of type EditorBrowsableState.
It does not require that the EditorBrowsableAttribute and EditorBrowsableState types be those
shipped by Microsoft, but it does demand the types found follow the expected pattern. If at any
point that pattern appears to be violated, return null to indicate that an appropriate constructor
could not be found.
The TypeLib*Attribute classes that accept TypeLib*Flags with FHidden as an option all have two constructors,
one accepting a TypeLib*Flags and the other a short. This method gets those two constructor symbols for any
of these attribute classes. It does not require that either of these types be those shipped by Microsoft,
but it does demand the types found follow the expected pattern. If at any point that pattern appears to be
violated, return an empty enumerable to indicate that no appropriate constructors were found.
Helper for checking whether cycles exist in the extension ordering.
Throws if a cycle is detected.
A cycle was detected in the extension ordering.
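The cycle check can be sketched as a depth-first search with a "currently visiting" set; this is a generic illustration (hypothetical names), not the actual helper:

```python
def check_for_cycles(order_graph):
    """order_graph: mapping from an extension to the extensions it is
    ordered after. Raises ValueError if a cycle is detected."""
    visiting, done = set(), set()

    def visit(node):
        if node in done:
            return
        if node in visiting:
            raise ValueError("A cycle was detected in the extension ordering.")
        visiting.add(node)
        for predecessor in order_graph.get(node, ()):
            visit(predecessor)
        visiting.discard(node)
        done.add(node)

    for node in list(order_graph):
        visit(node)
```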
Returns an that will call on
when it is disposed.
An optional interface which allows an environment to customize the behavior for synchronous methods that need to
block on the result of an asynchronous invocation. An implementation of this is provided in the MEF catalog when
applicable.
For Visual Studio, Microsoft.VisualStudio.Threading provides the JoinableTaskFactory.Run method, which is
the expected way to invoke an asynchronous method from a synchronous entry point and block on its completion.
Other environments may choose to use this or any other strategy, or omit an implementation of this interface to
allow callers to simply use .
New code is expected to use fully-asynchronous programming where possible. In cases where external APIs
restrict ability to be asynchronous, this service allows Roslyn to adhere to environmental policies related to
joining asynchronous work.
Utility class that can be used to track the progress of an operation in a threadsafe manner.
Utility class that can be used to track the progress of an operation in a threadsafe manner.
An XML parser that is designed to parse small fragments of XML such as those that appear in documentation comments.
PERF: We try to re-use the same underlying to reduce the allocation costs of multiple parses.
Parse the given XML fragment. The given callback is executed until either the end of the fragment
is reached or an exception occurs.
Type of an additional argument passed to the delegate.
The fragment to parse.
Action to execute while there is still more to read.
Additional argument passed to the callback.
It is important that the action advances the ,
otherwise parsing will never complete.
A text reader over a synthesized XML stream consisting of a single root element followed by a potentially
infinite stream of fragments. Each time "SetText" is called the stream is rewound to the element immediately
following the synthetic root node.
Current text to validate.
Determines, using heuristics, what the next likely value is in this enum.
Helper class to analyze the semantic effects of a speculated syntax node replacement on the parenting nodes.
Given an expression node from a syntax tree and a new expression from a different syntax tree,
it replaces the expression with the new expression to create a speculated syntax tree.
It uses the original tree's semantic model to create a speculative semantic model and verifies that
the syntax replacement doesn't break the semantics of any parenting nodes of the original expression.
Creates a semantic analyzer for speculative syntax replacement.
Original expression to be replaced.
New expression to replace the original expression.
Semantic model of node's syntax tree.
Cancellation token.
True if semantic analysis should be skipped for the replaced node and performed starting from parent of the original and replaced nodes.
This could be the case when custom verifications are required to be done by the caller or
semantics of the replaced expression are different from the original expression.
True if semantic analysis should fail when any of the invocation expression ancestors of in original code has overload resolution failures.
Original expression to be replaced.
First ancestor of which is either a statement, attribute, constructor initializer,
field initializer, default parameter initializer or type syntax node.
It serves as the root node for all semantic analysis for this syntax replacement.
Semantic model for the syntax tree corresponding to
Node which replaces the .
Note that this node is a cloned version of node, which has been re-parented
under the node to be speculated, i.e. .
Node created by replacing under node.
This node is used as the argument to the GetSpeculativeSemanticModel API and serves as the root node for all
semantic analysis of the speculated tree.
Speculative semantic model used for analyzing the semantics of the new tree.
Determines whether performing the given syntax replacement will change the semantics of any parenting expressions
by performing a bottom up walk from the up to
in the original tree and simultaneously walking bottom up from up to
in the speculated syntax tree and performing appropriate semantic comparisons.
Checks whether the semantic symbols for the and are non-null and compatible.
Determine if removing the cast could cause the semantics of a System.Object method call to change.
E.g. Dim b = CStr(1).GetType() is necessary, but the GetType method symbol info resolves to the same with or without the cast.
Determines if the symbol is a non-overridable, non-static method on System.Object (e.g. GetType).
Returns if items were already cached for this and
, otherwise. Callers should use this value to
determine if they should call or not. A result of does
*not* mean that is non-.
If token1 is expected to be part of the leading trivia of token2, then the trivia
before token1FullSpanEnd (the full-span end of token1) should be ignored.
This will create a span that includes the trailing trivia of the previous token and the leading trivia of the next token.
For example, for code such as "class A { int ...", if the given tokens are "A" and "{", this will return the span [] of "class[ A { ]int ...",
which includes the trailing trivia of "class" (the token before "A") and the leading trivia of "int" (the token after "{").
Content hash of the original document containing the invocation to be intercepted.
(See )
The position in the file of the invocation that was intercepted. This is the absolute
start of the name token being invoked (e.g. this.$$Goo(x, y, z)) (see ).
Content hash of the original document containing the invocation to be intercepted.
(See )
The position in the file of the invocation that was intercepted. This is the absolute
start of the name token being invoked (e.g. this.$$Goo(x, y, z)) (see ).
Content hash of the original document containing the invocation to be intercepted.
(See )
The position in the file of the invocation that was intercepted. This is the absolute
start of the name token being invoked (e.g. this.$$Goo(x, y, z)) (see ).
The original expression that is being replaced by . This will be in the
that points at.
The new node that n was replaced with. This will be in the that points at.
The original semantic model that was contained in.
A forked semantic model off of . In that model will have been replaced with .
Given a set of folders from build the namespace that would match
the folder structure. If a document is located in { "Bat", "Bar", "Baz" } then the namespace could be
"Bat.Bar.Baz". If a rootNamespace is provided, it is prepended to the generated namespace.
Returns null if the folders contain parts that are invalid identifiers for a namespace.
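The folder-to-namespace rule above can be sketched like this. This is an illustrative Python version under stated assumptions (a simple identifier regex; `namespace_from_folders` and its parameters are hypothetical names, not the Roslyn API):

```python
# Sketch: derive a namespace string from a document's folder path,
# prepending an optional root namespace. Returns None when any folder
# part is not a valid identifier. Illustrative, not the Roslyn helper.
import re

_IDENT = re.compile(r"^[A-Za-z_][A-Za-z0-9_]*$")  # simplified identifier rule

def namespace_from_folders(folders, root_namespace=None):
    if any(not _IDENT.match(part) for part in folders):
        return None  # some folder part is an invalid namespace identifier
    parts = ([root_namespace] if root_namespace else []) + list(folders)
    return ".".join(parts)
```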
Used when the consumeItems routine will only pull items on a single thread (never concurrently). produceItems
can be called concurrently on many threads.
Used when the consumeItems routine will only pull items on a single thread (never concurrently). produceItems
can be called on a single thread as well (never concurrently).
Version of when the caller prefers the results pre-packaged into arrays to process.
Version of when the caller prefers working with a stream of results.
IEnumerable<TSource> -> Task. Callback receives IAsyncEnumerable items.
IAsyncEnumerable<TSource> -> Task. Callback receives IAsyncEnumerable items.
IEnumerable<TSource> -> Task. Callback receives ImmutableArray of items.
IAsyncEnumerable<TSource> -> Task. Callback receives ImmutableArray of items.
IEnumerable<TSource> -> Task<TResult> Callback receives an IAsyncEnumerable of items.
IAsyncEnumerable<TSource> -> Task<TResult>. Callback receives an IAsyncEnumerable of items.
IEnumerable<TSource> -> Task<ImmutableArray<TResult>>
IAsyncEnumerable<TSource> -> Task<ImmutableArray<TResult>>
Helper utility for the pattern of a pair of a production routine and consumption routine using a channel to
coordinate data transfer. The provided are used to create a , which will then manage the rules and behaviors around the routines. Importantly, the
channel handles backpressure, ensuring that if the consumption routine cannot keep up, that the production
routine will be throttled.
is the routine called to actually produce the items. It will be passed an
action that can be used to write items to the channel. Note: the channel itself will have rules depending on whether
that writing can happen concurrently on multiple write threads or just a single writer. See for control of this when creating the channel.
is the routine called to consume the items. Similarly, reading can have just a
single reader or multiple readers, depending on the value passed into .
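The producer/consumer coordination described above can be sketched with a bounded queue as the channel. This is an illustrative asyncio version (names like `produce_consume` are hypothetical, not the Roslyn helper): the bounded capacity is what provides backpressure, because a full channel suspends the producer until the consumer catches up.

```python
# Sketch: a production routine and a consumption routine coordinated by a
# bounded channel (asyncio.Queue). If the consumer cannot keep up, `put`
# blocks, throttling the producer. Illustrative only.
import asyncio

async def produce_consume(produce_items, consume_items, capacity=16):
    queue = asyncio.Queue(maxsize=capacity)
    DONE = object()  # sentinel signalling end of production

    async def producer():
        await produce_items(queue.put)  # callback writes items to the channel
        await queue.put(DONE)

    async def consumer():
        results = []
        while (item := await queue.get()) is not DONE:
            results.append(await consume_items(item))
        return results

    _, results = await asyncio.gather(producer(), consumer())
    return results
```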
Helper as VB's CType doesn't work without arithmetic overflow.
Helper class to allow one to do simple regular expressions over a sequence of objects (as
opposed to a sequence of characters).
Matcher equivalent to (m*)
Matcher equivalent to (m+)
Matcher equivalent to (m_1|m_2|...|m_n)
Matcher equivalent to (m_1 ... m_n)
Matcher that matches an element if the provided predicate returns true.
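The matcher combinators listed above can be sketched as functions over object sequences. This is an illustrative Python version (the names `single`, `sequence`, `choice`, `repeat`, and `one_or_more` are assumptions, not the Roslyn Matcher API): each matcher takes a sequence and an index and returns the index after the match, or None on failure.

```python
# Sketch: regular-expression-like matchers over sequences of objects rather
# than characters. Each matcher returns the new index or None. Illustrative.
def single(predicate):
    def m(items, i):
        return i + 1 if i < len(items) and predicate(items[i]) else None
    return m

def sequence(*matchers):          # (m_1 ... m_n)
    def m(items, i):
        for sub in matchers:
            i = sub(items, i)
            if i is None:
                return None
        return i
    return m

def choice(*matchers):            # (m_1|m_2|...|m_n)
    def m(items, i):
        for sub in matchers:
            j = sub(items, i)
            if j is not None:
                return j
        return None
    return m

def repeat(matcher):              # (m*) — zero or more, greedy
    def m(items, i):
        while (j := matcher(items, i)) is not None:
            i = j
        return i
    return m

def one_or_more(matcher):         # (m+)
    return sequence(matcher, repeat(matcher))
```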
Executes a for each operation on an in which iterations may run in parallel.
The type of the data in the source.
An enumerable data source.
An asynchronous delegate that is invoked once per element in the data source.
The argument or argument is .
A task that represents the entire for each operation.
The operation will execute at most operations in parallel.
Executes a for each operation on an in which iterations may run in parallel.
The type of the data in the source.
An enumerable data source.
A cancellation token that may be used to cancel the for each operation.
An asynchronous delegate that is invoked once per element in the data source.
The argument or argument is .
A task that represents the entire for each operation.
The operation will execute at most operations in parallel.
Executes a for each operation on an in which iterations may run in parallel.
The type of the data in the source.
An enumerable data source.
An object that configures the behavior of this operation.
An asynchronous delegate that is invoked once per element in the data source.
The argument or argument is .
A task that represents the entire for each operation.
Executes a for each operation on an in which iterations may run in parallel.
The type of the data in the source.
An enumerable data source.
An integer indicating how many operations to allow to run in parallel.
The task scheduler on which all code should execute.
A cancellation token that may be used to cancel the for each operation.
An asynchronous delegate that is invoked once per element in the data source.
The argument or argument is .
A task that represents the entire for each operation.
Executes a for each operation on an in which iterations may run in parallel.
The type of the data in the source.
An asynchronous enumerable data source.
An asynchronous delegate that is invoked once per element in the data source.
The argument or argument is .
A task that represents the entire for each operation.
The operation will execute at most operations in parallel.
Executes a for each operation on an in which iterations may run in parallel.
The type of the data in the source.
An asynchronous enumerable data source.
A cancellation token that may be used to cancel the for each operation.
An asynchronous delegate that is invoked once per element in the data source.
The argument or argument is .
A task that represents the entire for each operation.
The operation will execute at most operations in parallel.
Executes a for each operation on an in which iterations may run in parallel.
The type of the data in the source.
An asynchronous enumerable data source.
An object that configures the behavior of this operation.
An asynchronous delegate that is invoked once per element in the data source.
The argument or argument is .
A task that represents the entire for each operation.
Executes a for each operation on an in which iterations may run in parallel.
The type of the data in the source.
An asynchronous enumerable data source.
An integer indicating how many operations to allow to run in parallel.
The task scheduler on which all code should execute.
A cancellation token that may be used to cancel the for each operation.
An asynchronous delegate that is invoked once per element in the data source.
The argument or argument is .
A task that represents the entire for each operation.
Gets the default degree of parallelism to use when none is explicitly provided.
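The bounded-parallelism "for each" contract described by the overloads above can be sketched with a semaphore. This is an illustrative asyncio version (the name `for_each_async` is an assumption): at most `max_parallelism` body invocations run at once. Note the actual implementation shares one enumerator among a fixed set of workers rather than spawning a task per item, which the state objects described below exist to coordinate.

```python
# Sketch: invoke an async body once per element, with at most
# `max_parallelism` invocations in flight at any time. Illustrative only.
import asyncio

async def for_each_async(source, body, max_parallelism):
    gate = asyncio.Semaphore(max_parallelism)

    async def run(item):
        async with gate:  # throttle concurrent body invocations
            await body(item)

    # The returned task represents the entire for-each operation.
    await asyncio.gather(*(run(item) for item in source))
```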
Stores the state associated with a ForEachAsync operation, shared between all its workers.
Specifies the type of data being enumerated.
The caller-provided cancellation token.
Registration with caller-provided cancellation token.
The delegate to invoke on each worker to run the enumerator processing loop.
This could have been an action rather than a func, but it returns a task so that the task body is an async Task
method rather than async void, even though the worker body catches all exceptions and the returned Task is ignored.
The on which all work should be performed.
Semaphore used to provide exclusive access to the enumerator.
The number of outstanding workers. When this hits 0, the operation has completed.
Any exceptions incurred during execution.
The number of workers that may still be created.
The delegate to invoke for each element yielded by the enumerator.
The internal token source used to cancel pending work.
Initializes the state object.
Queues another worker if allowed by the remaining degree of parallelism permitted.
This is not thread-safe and must only be invoked by one worker at a time.
Signals that the worker has completed iterating.
true if this is the last worker to complete iterating; otherwise, false.
Asynchronously acquires exclusive access to the enumerator.
Relinquishes exclusive access to the enumerator.
Stores an exception and triggers cancellation in order to alert all workers to stop as soon as possible.
The exception.
Completes the ForEachAsync task based on the status of this state object.
Stores the state associated with an IEnumerable ForEachAsync operation, shared between all its workers.
Specifies the type of data being enumerated.
Stores the state associated with an IAsyncEnumerable ForEachAsync operation, shared between all its workers.
Specifies the type of data being enumerated.
Stores the state associated with an IAsyncEnumerable ForEachAsync operation, shared between all its workers.
Specifies the type of data being enumerated.
Breaks an identifier string into constituent parts.
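Breaking an identifier into word parts can be sketched with a small heuristic. This is an illustrative Python version (a regex-based approximation, not the Roslyn StringBreaker): split at case transitions, keep all-caps acronym runs together, and drop separators like underscores.

```python
# Sketch: break an identifier into constituent parts at case transitions
# and separators, e.g. "HTMLParser" -> ["HTML", "Parser"]. Illustrative.
import re

_PART = re.compile(
    r"[A-Z]+(?![a-z])"   # all-caps run not followed by lowercase (acronym)
    r"|[A-Z][a-z0-9]*"   # capitalized word
    r"|[a-z0-9]+"        # lowercase/digit run
)

def break_identifier(name):
    return _PART.findall(name)
```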
Provides a way to test two symbols for equivalence. While there are ways to ask for
different sorts of equivalence, the following must hold for two symbols to be considered
equivalent.
- The kinds of the two symbols must match.
- The names of the two symbols must match.
- The arity of the two symbols must match.
- If the symbols are methods or parameterized properties, then the signatures of the two
symbols must match.
- Both symbols must be definitions or must be instantiations. If they are instantiations,
then they must be instantiated in the same manner.
- The containing symbols of the two symbols must be equivalent.
- Nullability of symbols is not involved in the comparison.
Note: equivalence does not concern itself with whole symbols. Two types are considered
equivalent if the above hold, even if one type has different members than the other. Note:
type parameters, and signature parameters are not considered 'children' when comparing
symbols.
Options are provided to tweak the above slightly. For example, by default, symbols are
equivalent only if they come from the same assembly or different assemblies of the same simple name.
However, one can ask if two symbols are equivalent even if their assemblies differ.
Compares given symbols and for equivalence.
Compares given symbols and for equivalence and populates
with equivalent non-nested named type key-value pairs that are contained in different assemblies.
These equivalent named type key-value pairs represent possibly equivalent forwarded types, but this API doesn't perform any type forwarding equivalence checks.
This API is only supported for .
Worker for comparing two named types for equivalence. Note: The two
types must have the same TypeKind.
The first type to compare
The second type to compare
Map of equivalent non-nested types to be populated, such that each key-value pair of named types are equivalent but reside in different assemblies.
This map is populated only if we are ignoring assemblies for symbol equivalence comparison, i.e. is true.
True if the two types are equivalent.
Transforms baseName into a name that does not conflict with any name in 'reservedNames'
Ensures that any 'names' is unique and does not collide with any other name. Names that
are marked as IsFixed cannot be touched. This does mean that if there are two names
that are the same, and both are fixed, you will end up with non-unique names at the
end.
Updates the names in to be unique. A name at a particular
index i will not be touched if isFixed[i] is . All
other names will not collide with any other in and will all
return for canUse(name).
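The uniquification rules above can be sketched as follows. This is an illustrative Python version (the name `make_unique` and its numeric-suffix strategy are assumptions, not the Roslyn implementation): fixed names are never altered, so two identical fixed names remain non-unique, exactly as the remarks above warn.

```python
# Sketch: make every non-fixed name unique by appending a numeric suffix
# until it neither collides with a taken name nor fails `can_use`.
# Fixed names are left untouched. Illustrative only.
def make_unique(names, is_fixed, can_use=lambda n: True):
    result = list(names)
    taken = {n for n, fixed in zip(names, is_fixed) if fixed}
    for i, name in enumerate(names):
        if is_fixed[i]:
            continue  # fixed names cannot be touched
        candidate, suffix = name, 1
        while candidate in taken or not can_use(candidate):
            candidate = f"{name}{suffix}"
            suffix += 1
        result[i] = candidate
        taken.add(candidate)
    return result
```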
Gets a mutable reference to a stored in a using variable.
This supporting method allows , a non-copyable
implementing , to be used with using statements while still allowing them to
be passed by reference in calls. The following two calls are equivalent:
using var array = TemporaryArray<T>.Empty;
// Using the 'Unsafe.AsRef' method
Method(ref Unsafe.AsRef(in array));
// Using this helper method
Method(ref array.AsRef());
⚠ Do not move or rename this method without updating the corresponding
RS0049
analyzer.
The type of element stored in the temporary array.
A read-only reference to a temporary array which is part of a using statement.
A mutable reference to the temporary array.
Provides temporary storage for a collection of elements. This type is optimized for handling of small
collections, particularly for cases where the collection will eventually be discarded or used to produce an
.
This type stores small collections on the stack, with the ability to transition to dynamic storage if/when
a larger number of elements is added.
The type of elements stored in the collection.
The number of elements the temporary can store inline. Storing more than this many elements requires the
array to transition to dynamic storage.
The first inline element.
This field is only used when is . In other words, this type
stores elements inline or stores them in , but does not use both approaches
at the same time.
The second inline element.
The third inline element.
The fourth inline element.
The number of inline elements held in the array. This value is only used when is
.
A builder used for dynamic storage of collections that may exceed the limit for inline elements.
This field is initialized to non- the first time the
needs to store more than four elements. From that point, is used instead of inline
elements, even if items are removed to make the result smaller than four elements.
Create an with the elements currently held in the temporary array, and clear the
array.
Create an with the elements currently held in the temporary array, and clear
the array.
Transitions the current from inline storage to dynamic storage. An
instance is taken from the shared pool, and all elements currently in inline
storage are added to it. After this point, dynamic storage will be used instead of inline storage.
Throws .
This helper improves the ability of the JIT to inline callers.
Implementation of an backed by a contiguous array of values. This is a more memory
efficient way to store an interval tree than the traditional binary tree approach. This should be used when the
values of the interval tree are known up front and will not change after the tree is created.
The nodes of this interval tree flattened into a single array. The root is at index 0. The left child of any
node at index i is at 2*i + 1 and the right child is at 2*i + 2. If a left/right child
index is beyond the length of this array, that is equivalent to that node not having such a child.
The binary tree we represent here is a *complete* binary tree (not to be confused with a *perfect* binary tree).
A complete binary tree is a binary tree in which every level, except possibly the last, is completely filled,
and all nodes in the last level are as far left as possible.
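The array layout described above can be sketched with the standard index arithmetic for a complete binary tree. This is an illustrative Python version (helper names are assumptions): children are located by arithmetic rather than pointer chasing, and an out-of-range index simply means the node has no such child.

```python
# Sketch: complete-binary-tree layout in a flat array. Root at index 0;
# children of node i are at 2*i + 1 and 2*i + 2. An index beyond the array
# length means that child does not exist. Illustrative only.
def left_child(i):
    return 2 * i + 1

def right_child(i):
    return 2 * i + 2

def has_child(nodes, child_index):
    return child_index < len(nodes)
```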
Provides access to lots of common algorithms on this interval tree.
Creates a from an unsorted list of . This will
incur a delegate allocation to sort the values. If callers can avoid that allocation by pre-sorting the values,
they should do so and call instead.
will be sorted in place.
Creates an interval tree from a sorted list of values. This is more efficient than creating from an unsorted
list as building doesn't need to figure out where the nodes need to go (an O(n log n) operation) and doesn't have to
rebalance anything (again, another O(n log n) operation). Rebalancing is particularly expensive as it involves tons of
pointer chasing operations, which is both slow, and which impacts the GC which has to track all those writes.
The values must be sorted such that given any two elements 'a' and 'b' in the list, if 'a' comes before 'b' in
the list, then its "start position" (as determined by the introspector) must be less than or equal to 'b's
start position. This is a requirement for the algorithm to work correctly.
Wrapper type to allow the IntervalTreeHelpers type to work with this type.
Generic interface used to pass in the particular interval testing operation to be performed on an interval tree. For
example checking if an interval 'contains', 'intersects', or 'overlaps' with a requested span. Will be erased at
runtime as it will always be passed through as a generic parameter that is a struct.
Base interface all interval trees need to implement to get full functionality. Callers are not expected to use
these methods directly. Instead, they are the low level building blocks that the higher level extension methods are
built upon. Consumers of an interval tree should use .Algorithms on the instance to get access to a wealth of
fast operations through the type.
Iterating an interval tree will return the intervals in sorted order based on the start point of the interval.
Adds all intervals in the tree that fall within the given start/length pair and match the given predicate. Results
are added to the array. The indicates if the search should stop after the first interval is found. Results will be
returned in sorted order based on the start point of the interval.
The number of matching intervals found by the method.
Practically equivalent to with a check that at least one item was
found. However, separated out as a separate method as implementations can often be more efficient just
answering this question, versus the more complex "fill with intervals" question above.
Helpers for working with instances. Can be retrieved by calling .Extensions
on an interval tree instance. This is exposed as a struct instead of extension methods as the type inference
involved here is too complex for C# to handle (specifically using a TIntervalTree type), which would make
ergonomics extremely painful as the callsites would have to pass three type arguments along explicitly.
Helpers for working with instances. Can be retrieved by calling .Extensions
on an interval tree instance. This is exposed as a struct instead of extension methods as the type inference
involved here is too complex for C# to handle (specifically using a TIntervalTree type), which would make
ergonomics extremely painful as the callsites would have to pass three type arguments along explicitly.
Witness interface that allows transparent access to information about a specific
implementation without needing to know the specifics of that implementation. This allows to operate transparently over any
implementation. IntervalTreeHelpers constrains its TIntervalTreeWitness type to be a struct to ensure this can be
entirely reified and erased by the runtime.
Utility helpers used to allow code sharing for the different implementations of s.
An introspector that always throws. Used when we need to call an api that takes this, but we know will never
call into it due to other arguments we pass along.
Because we're passing the full span of all ints, we know that we'll never call into the introspector, since
all intervals will always be in that span.
Struct based enumerator, so we can iterate an interval tree without allocating.
An interval tree represents an ordered tree data structure to store intervals of the form [start, end). It allows
you to efficiently find all intervals that intersect or overlap a provided interval.
This is the root type for all interval trees that store their data in a binary tree format. This format is good for
when mutation of the tree is expected, and a client wants to perform tests before and after such mutation.
Provides access to lots of common algorithms on this interval tree.
Wrapper type to allow the IntervalTreeHelpers type to work with this type.
Warning. Mutates the tree in place.
Initializes a new instance of that is
empty.
Initializes a new instance of that contains the specified span.
TextSpan contained by the span set.
Initializes a new instance of that contains the specified list of spans.
The spans to be added.
The list of spans will be sorted and normalized (overlapping and adjoining spans will be combined).
This constructor runs in O(N log N) time, where N = spans.Count.
is null.
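The normalization performed by this constructor can be sketched as a sort followed by a single merge pass. This is an illustrative Python version under stated assumptions (spans modeled as half-open `(start, end)` tuples; `normalize` is a hypothetical name): overlapping and adjoining spans are combined, and the sort dominates at O(N log N).

```python
# Sketch: sort spans by start, then merge any span that overlaps or adjoins
# the previous one. Spans are (start, end) half-open pairs. Illustrative.
def normalize(spans):
    result = []
    for start, end in sorted(spans):
        if result and start <= result[-1][1]:  # overlaps or adjoins previous
            prev_start, prev_end = result[-1]
            result[-1] = (prev_start, max(prev_end, end))
        else:
            result.append((start, end))
    return result
```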
Finds the union of two span sets.
The first span set.
The second span set.
The new span set that corresponds to the union of and .
This operator runs in O(N+M) time where N = left.Count, M = right.Count.
Either or is null.
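The O(N+M) union of two span sets can be sketched as one merge pass over the two already-normalized lists. This is an illustrative Python version under stated assumptions (both inputs sorted and normalized; spans are half-open `(start, end)` tuples; `union` is a hypothetical name):

```python
# Sketch: union of two normalized span lists in a single O(N+M) merge pass,
# combining spans that overlap or adjoin. Illustrative only.
import heapq

def union(left, right):
    result = []
    # heapq.merge walks both sorted inputs once, yielding spans in order.
    for start, end in heapq.merge(left, right):
        if result and start <= result[-1][1]:
            result[-1] = (result[-1][0], max(result[-1][1], end))
        else:
            result.append((start, end))
    return result
```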
Finds the overlap of two span sets.
The first span set.
The second span set.
The new span set that corresponds to the overlap of and .
This operator runs in O(N+M) time where N = left.Count, M = right.Count.
or is null.
Finds the intersection of two span sets.
The first span set.
The second span set.
The new span set that corresponds to the intersection of and .
This operator runs in O(N+M) time where N = left.Count, M = right.Count.
is null.
is null.
Finds the difference between two sets. The difference is defined as everything in the first span set that is not in the second span set.
The first span set.
The second span set.
The new span set that corresponds to the difference between and .
Empty spans in the second set do not affect the first set at all. This method returns empty spans in the first set that are not contained by any span in
the second set.
is null.
is null.
Determines whether two span sets are the same.
The first set.
The second set.
true if the two sets are equivalent, otherwise false.
Determines whether two span sets are not the same.
The first set.
The second set.
true if the two sets are not equivalent, otherwise false.
Determines whether this span set overlaps with another span set.
The span set to test.
true if the span sets overlap, otherwise false.
is null.
Determines whether this span set overlaps with another span.
The span to test.
true if this span set overlaps with the given span, otherwise false.
Determines whether this span set intersects with another span set.
Set to test.
true if the span sets intersect, otherwise false.
is null.
Determines whether this span set intersects with another span.
true if this span set intersects with the given span, otherwise false.
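The overlap/intersection distinction used by the methods above can be sketched for single spans. This is an illustrative Python version under stated assumptions (half-open `(start, end)` spans; these semantics follow the usual editor-span convention that overlap requires a shared position while intersection also counts spans that merely touch):

```python
# Sketch: "overlaps" requires at least one position in common; "intersects"
# also counts spans that merely touch at an endpoint (or empty spans inside
# another span). Spans are (start, end) half-open pairs. Illustrative.
def overlaps(a, b):
    return max(a[0], b[0]) < min(a[1], b[1])

def intersects(a, b):
    return a[0] <= b[1] and b[0] <= a[1]
```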
Gets a unique hash code for the span set.
A 32-bit hash code associated with the set.
Determines whether this span set is the same as another object.
The object to test.
true if the two objects are equal, otherwise false.
Provides a string representation of the set.
The string representation of the set.
Private constructor for use when the span list is already normalized.
An already normalized span list.
Contains the options that need to be drilled down to the Simplification Engine
This option tells the simplification engine whether a Qualified Name should be replaced by an Alias
if the user had not initially used the Alias
This option influences the name reduction of members of a module in VB. If set to true, the
name reducer will e.g. reduce Namespace.Module.Member to Namespace.Member.
This option says if we should simplify a Generic Name which has its type argument inferred
This option says if we should simplify the Explicit Type in Local Declarations
This option says if we should simplify to NonGeneric Name rather than GenericName
This option says if we should simplify from Derived types to Base types in Static Member Accesses
This option says if we should simplify away the or in member access expressions.
This option says if we should simplify away the . or . in field access expressions.
This option says if we should simplify away the . or . in property access expressions.
This option says if we should simplify away the . or . in method access expressions.
This option says if we should simplify away the . or . in event access expressions.
This option says if we should prefer keyword for Intrinsic Predefined Types in Declarations
This option says if we should prefer keyword for Intrinsic Predefined Types in Member Access Expression
Expands and Reduces subtrees.
Expansion:
1) Makes inferred names explicit (on anonymous types and tuples).
2) Replaces names with fully qualified dotted names.
3) Adds parentheses around expressions
4) Adds explicit casts/conversions where implicit conversions exist
5) Adds escaping to identifiers
6) Rewrites extension method invocations with explicit calls on the class containing the extension method.
Reduction:
1) Shortens dotted names to their minimally qualified form
2) Removes unnecessary parentheses
3) Removes unnecessary casts/conversions
4) Removes unnecessary escaping
5) Rewrites explicit calls to extension methods to use dot notation
6) Removes unnecessary tuple element names and anonymous type member names
The annotation the reducer uses to identify sub trees to be reduced.
The Expand operations add this annotation to nodes so that the Reduce operations later find them.
This is the annotation used by the simplifier and expander to identify Predefined type and preserving
them from over simplification
The annotation used to identify sub trees to look for symbol annotations on.
It will then add import directives for these symbol annotations.
Expand qualifying parts of the specified subtree, annotating the parts using the annotation.
Expand qualifying parts of the specified subtree, annotating the parts using the annotation.
Expand qualifying parts of the specified subtree, annotating the parts using the annotation.
Expand qualifying parts of the specified subtree, annotating the parts using the annotation.
Expand qualifying parts of the specified subtree, annotating the parts using the annotation.
Expand qualifying parts of the specified subtree, annotating the parts using the annotation.
Reduce all sub-trees annotated with found within the document. The annotated node and all child nodes will be reduced.
Reduce the sub-trees annotated with found within the subtrees identified with the specified .
The annotated node and all child nodes will be reduced.
Reduce the sub-trees annotated with found within the specified span.
The annotated node and all child nodes will be reduced.
Reduce the sub-trees annotated with found within the specified spans.
The annotated node and all child nodes will be reduced.
This annotation will be used by the expansion/reduction to annotate expanded syntax nodes to store the information that an
alias was used before expansion.
When applied to a SyntaxNode, prevents the simplifier from converting a type to 'var'.
Language agnostic defaults.
An annotation that holds onto information about a type or namespace symbol.
Checks a member access expression expr.Name and, if it is of the form this.Name or
Me.Name determines if it is safe to replace with just Name alone.
Analyzers are computed for visible documents
and open documents which had errors/warnings in the prior solution snapshot.
We want to analyze such non-visible, open documents to ensure that these
prior reported errors/warnings get cleared out from the error list if they are
no longer valid in the latest solution snapshot, hence ensuring error list has
no stale entries.
Analyzers are executed for all open documents.
Analyzers are executed for all documents in the current solution.
Analyzers are disabled for all documents.
Compiler warnings and errors are disabled for all documents.
Compiler warnings and errors are computed for visible documents
and open documents which had errors/warnings in the prior solution snapshot.
We want to analyze such non-visible, open documents to ensure that these
prior reported errors/warnings get cleared out from the error list if they are
no longer valid in the latest solution snapshot, hence ensuring error list has
no stale entries.
Compiler warnings and errors are computed for all open documents.
Compiler warnings and errors are computed for all documents in the current solution.
Given a particular project in the remote solution snapshot, return information about all the generated documents
in that project. The information includes the identity
information about the document, as well as its text . The local workspace can then
compare that to the prior generated documents it has to see if it can reuse those directly, or if it needs to
remove any documents no longer around, add any new documents, or change the contents of any existing documents.
Controls if the caller wants frozen source generator documents
included in the result, or if only the most underlying generated documents (produced by the real compiler) should be included.
Given a particular set of generated document ids, returns the fully generated content for those documents.
Should only be called by the host for documents it does not know about, or documents whose checksum contents are
different than the last time the document was queried.
Controls if the caller wants frozen source generator documents
included in the result, or if only the most underlying generated documents (produced by the real compiler) should be included.
Whether or not the specified analyzer references have source generators or not.
Returns the identities for all source generators found in the with equal to .
Returns whether or not the with
equal to has any analyzers or source generators.
Information that uniquely identifies the content of a source-generated document and ensures the remote and local
hosts are in agreement on them.
Checksum originally produced from on
the server side. This may technically not be the same checksum that is produced on the client side once the
SourceText is hydrated there. See comments on for more details on when this happens.
Result of 's .
Result of .
Information that uniquely identifies the content of a source-generated document and ensures the remote and local
hosts are in agreement on them.
Checksum originally produced from on
the server side. This may technically not be the same checksum that is produced on the client side once the
SourceText is hydrated there. See comments on for more details on when this happens.
Result of 's .
Result of .
Checksum originally produced from on
the server side. This may technically not be the same checksum that is produced on the client side once the
SourceText is hydrated there. See comments on for more details on when this happens.
Result of 's .
Result of .
Cache of the for a generator to avoid repeatedly reading version information from disk;
this is a ConditionalWeakTable so having telemetry for older runs doesn't keep the generator itself alive.
A service that enables storing and retrieving of information associated with solutions,
projects or documents across runtime sessions.
A service that enables storing and retrieving of information associated with solutions,
projects or documents across runtime sessions.
Can throw. If it does, the caller () will attempt
to delete the database and retry opening one more time. If that fails again, the instance will be used.
Obsolete. Roslyn no longer supports a mechanism to perform arbitrary persistence of data. If such functionality
is needed, consumers are responsible for providing it themselves with whatever semantics are needed.
Handle that can be used with to read data for a
without needing to have the entire snapshot available.
This is useful for cases where acquiring an entire snapshot might be expensive (for example, during
solution load), but querying the data is still desired.
Handle that can be used with to read data for a
without needing to have the entire snapshot available.
This is useful for cases where acquiring an entire snapshot might be expensive (for example, during
solution load), but querying the data is still desired.
Handle that can be used with to read data for a
without needing to have the entire snapshot available.
This is useful for cases where acquiring an entire snapshot might be expensive (for example, during
solution load), but querying the data is still desired.
Handle that can be used with to read data for a
without needing to have the entire snapshot available.
This is useful for cases where acquiring an entire snapshot might be expensive (for example, during
solution load), but querying the data is still desired.
Handle that can be used with to read data for a
without needing to have the entire snapshot available.
This is useful for cases where acquiring an entire snapshot might be expensive (for example, during
solution load), but querying the data is still desired.
Handle that can be used with to read data for a
without needing to have the entire snapshot available.
This is useful for cases where acquiring an entire snapshot might be expensive (for example, during
solution load), but querying the data is still desired.
encoded bytes of a text value. Span
should not be NUL-terminated.
The database that is stored on disk and actually persists data across VS sessions.
An in-memory database that caches values before being transferred to . Does not persist across VS sessions.
Name of the different dbs.
1. "main" is the default that sqlite uses. This just allows us to be explicit that we
want this db.
2. "writecache" is the name for the in-memory write-cache db. Writes will be staged
there and will be periodically flushed to the real on-disk db to help with perf.
Perf measurements show this as significantly better than all other design options. It's
also one of the simplest in terms of the design.
The design options in order of performance (slowest to fastest) are:
1. send writes directly to the main db. this is incredibly slow (since each write incurs
the full IO overhead of a transaction). It is the absolute simplest in terms of
implementation though.
2. send writes to a temporary on-disk db (with synchronous=off and journal_mode=memory),
then flush those to the main db. This is also quite slow due to their still needing to
be disk IO with each write. Implementation is fairly simple, with writes just going to
the temp db and reads going to both.
3. Buffer writes in (.net) memory and flush them to disk. This is much faster than '1'
or '2' but requires a lot of manual book-keeping and extra complexity. For example, any
reads go to the db. So that means that reads have to ensure that any writes to the same
rows have been persisted so they can observe them.
4. send writes to an sqlite in-memory cache DB. This is extremely fast for sqlite as
there is no actual IO that is performed. It is also easy in terms of bookkeeping as
both DBs have the same schema and are easy to move data between. '4' is faster than all
of the above. Complexity is minimized as reading can be done just by examining both DBs
in the same way. It's not as simple as '1' but it's much simpler than '3'.
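The write-cache design in option '4' can be sketched with a small illustrative Python sqlite3 example (the table name and helper functions are invented for this demo; the real implementation is C# and uses its own schema):

```python
import os
import sqlite3
import tempfile

# On-disk "main" db, plus an in-memory "writecache" db with the same schema.
path = os.path.join(tempfile.mkdtemp(), "storage.db")
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE IF NOT EXISTS SolutionData (DataNameId INTEGER PRIMARY KEY, Data BLOB)")
conn.execute("ATTACH DATABASE ':memory:' AS writecache")
conn.execute("CREATE TABLE writecache.SolutionData (DataNameId INTEGER PRIMARY KEY, Data BLOB)")

def write(name_id, data):
    # Writes are staged in the in-memory cache; no disk IO happens here.
    conn.execute("INSERT OR REPLACE INTO writecache.SolutionData VALUES (?, ?)", (name_id, data))

def read(name_id):
    # Reads examine both DBs in the same way: cache first, then the main db.
    for table in ("writecache.SolutionData", "main.SolutionData"):
        row = conn.execute(f"SELECT Data FROM {table} WHERE DataNameId = ?", (name_id,)).fetchone()
        if row is not None:
            return row[0]
    return None

def flush():
    # Periodically move all staged rows to the on-disk db in one transaction.
    with conn:
        conn.execute("INSERT OR REPLACE INTO main.SolutionData SELECT * FROM writecache.SolutionData")
        conn.execute("DELETE FROM writecache.SolutionData")

write(1, b"hello")
assert read(1) == b"hello"
flush()
assert read(1) == b"hello"
```

Because both databases share a schema, flushing is a single `INSERT ... SELECT`, which is what keeps option '4' nearly as simple as option '1'.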
Simple wrapper struct for a that helps ensure that the statement always has its
bindings cleared () and is reset after it is used.
See https://sqlite.org/c3ref/stmt.html:
The life-cycle of a prepared statement object usually goes like this:
1) Create the prepared statement object using sqlite3_prepare_v2().
2) Bind values to parameters using the sqlite3_bind_* () interfaces.
3) Run the SQL by calling sqlite3_step() one or more times.
4) Reset the prepared statement using sqlite3_reset() then go back to step 2. Do this zero or more times.
5) Destroy the object using sqlite3_finalize().
This type helps ensure that '4' happens properly for clients executing statements.
Note that destroying/finalizing a statement is not the responsibility of a client
as it will happen to all prepared statements when the is disposed.
Simple wrapper struct for a that helps ensure that the statement always has its
bindings cleared () and is reset after it is used.
See https://sqlite.org/c3ref/stmt.html:
The life-cycle of a prepared statement object usually goes like this:
1) Create the prepared statement object using sqlite3_prepare_v2().
2) Bind values to parameters using the sqlite3_bind_* () interfaces.
3) Run the SQL by calling sqlite3_step() one or more times.
4) Reset the prepared statement using sqlite3_reset() then go back to step 2. Do this zero or more times.
5) Destroy the object using sqlite3_finalize().
This type helps ensure that '4' happens properly for clients executing statements.
Note that destroying/finalizing a statement is not the responsibility of a client
as it will happen to all prepared statements when the is disposed.
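The reset-after-use guarantee that this wrapper provides can be illustrated with a Python sketch. The `Statement` class here is a hypothetical stand-in, not a real sqlite binding; the point is that bindings are cleared and the statement is reset no matter how the caller exits:

```python
class Statement:
    """Hypothetical stand-in for a prepared sqlite statement."""
    def __init__(self):
        self.bindings = {}
        self.was_reset = False

    def bind(self, index, value):
        self.bindings[index] = value  # 1-based, to match sqlite

    def clear_bindings(self):
        self.bindings = {}

    def reset(self):
        self.was_reset = True

class ResettableStatement:
    """Sketch of the wrapper: however the caller exits the 'with' block
    (normally or via an exception), bindings are cleared and the
    statement is reset so it can be reused (step 4 of the life-cycle)."""
    def __init__(self, statement):
        self._statement = statement

    def __enter__(self):
        return self._statement

    def __exit__(self, *exc):
        self._statement.clear_bindings()
        self._statement.reset()
        return False  # never swallow exceptions

stmt = Statement()
with ResettableStatement(stmt) as s:
    s.bind(1, "value")
assert stmt.bindings == {} and stmt.was_reset
```

This mirrors the 'using' pattern described above; finalization (step 5) is deliberately absent because it belongs to the connection, not the client.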
Encapsulates a connection to a sqlite database. On construction an attempt will be made
to open the DB if it exists, or create it if it does not.
Connections are considered relatively heavyweight and are pooled (see ). Connections can be used by different
threads, but only as long as they are used by one thread at a time. They are not safe for concurrent use by several
threads.
s can be created through the use of .
These statements are cached for the lifetime of the connection and are only finalized
(i.e. destroyed) when the connection is closed.
The raw handle to the underlying DB.
Our cache of prepared statements for given sql strings.
Whether or not we're in a transaction. We currently don't support nested transactions.
If we want that, we can achieve it through sqlite "save points". However, that adds a
lot of complexity that is nice to avoid.
If a that happens during execution of should bubble out of this method or not. If , then the exception
will be returned in the result value instead.
Represents a prepared sqlite statement. s can be
ed (i.e. executed). Executing a statement can result in
either if the command completed and produced no
value, or if it evaluated out to a sql row that can
then be queried.
If a statement is parameterized then parameters can be provided by the
BindXXX overloads. Bind is 1-based (to match sqlite).
When done executing a statement, the statement should be .
The easiest way to ensure this is to just use a 'using' statement along with
a . By resetting the statement, it can
then be used in the future with new bound parameters.
Finalization/destruction of the underlying raw sqlite statement is handled
by .
Represents a prepared sqlite statement. s can be
ed (i.e. executed). Executing a statement can result in
either if the command completed and produced no
value, or if it evaluated out to a sql row that can
then be queried.
If a statement is parameterized then parameters can be provided by the
BindXXX overloads. Bind is 1-based (to match sqlite).
When done executing a statement, the statement should be .
The easiest way to ensure this is to just use a 'using' statement along with
a . By resetting the statement, it can
then be used in the future with new bound parameters.
Finalization/destruction of the underlying raw sqlite statement is handled
by .
Implementation of an backed by SQLite.
Gets a from the connection pool, or creates one if none are available.
Database connections have a large amount of overhead, and should be returned to the pool when they are no
longer in use. In particular, make sure to avoid letting a connection lease cross an
boundary, as it will prevent code in the asynchronous operation from using the existing connection.
Only use this overload if it is safe to bypass the normal scheduler check. Only startup code (which runs
before any reads/writes/flushes happen) should use this.
Abstracts out access to specific tables in the DB. This allows us to share overall
logic around cancellation/pooling/error-handling/etc, while still hitting different
db tables.
Gets the internal sqlite db-id (effectively the row-id for the doc or proj table, or just the string-id
for the solution table) for the provided caller key. This db-id will be looked up and returned if a
mapping already exists for it in the db. Otherwise, a guaranteed unique id will be created for it and
stored in the db for the future. This allows all associated data to be cheaply associated with the
simple ID, avoiding lots of db bloat if we used the full in numerous places.
Whether or not the caller owns the write lock and thus is ok with the DB id
being generated and stored for this component key when it currently does not exist. If then failing to find the key will result in being returned.
Lock file that ensures only one database is made per process per solution.
For testing purposes. Allows us to test what happens when we fail to acquire the db lock file.
Use a to simulate a reader-writer lock.
Read operations are performed on the
and writes are performed on the .
We use this as a condition of using the in-memory shared-cache sqlite DB. This DB
doesn't busy-wait when attempts are made to lock the tables in it, which can lead to
deadlocks. Specifically, consider two threads doing the following:
Thread A starts a transaction that starts as a reader, and later attempts to perform a
write. Thread B is a writer (either started that way, or started as a reader and
promoted to a writer first). B holds a RESERVED lock, waiting for readers to clear so it
can start writing. A holds a SHARED lock (it's a reader) and tries to acquire RESERVED
lock (so it can start writing). The only way to make progress in this situation is for
one of the transactions to roll back. No amount of waiting will help, so when SQLite
detects this situation, it doesn't honor the busy timeout.
To prevent this scenario, we control our access to the db explicitly with operations that
can concurrently read, and operations that exclusively write.
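The concurrent-read / exclusive-write discipline described above can be sketched in Python with a counted semaphore standing in for C#'s SemaphoreSlim (the class and method names are invented for illustration):

```python
import threading

class ReadWriteGate:
    """Sketch of simulating a reader-writer lock with a counted semaphore:
    each reader takes one slot, while a writer takes every slot, so it only
    runs once all readers have finished and excludes everything else."""

    def __init__(self, max_readers=8):
        self._max = max_readers
        self._sem = threading.Semaphore(max_readers)

    def read(self, fn):
        # Concurrent with other readers; blocked only while a writer holds
        # all the slots.
        with self._sem:
            return fn()

    def write(self, fn):
        # Acquire every slot so no reader (or other writer) can run.
        for _ in range(self._max):
            self._sem.acquire()
        try:
            return fn()
        finally:
            for _ in range(self._max):
                self._sem.release()
```

Because a transaction is either all-reads or one exclusive write, the promote-a-reader-to-a-writer deadlock described above cannot arise.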
All code that reads or writes from the db should go through this.
Returns null in the case where an IO exception prevented us from being able to acquire
the db lock file.
Mapping from the workspace's ID for a document, to the ID we use in the DB for the document.
Kept locally so we don't have to hit the DB for the common case of trying to determine the
DB id for a document.
Given a document, and the name of a stream to read/write, gets the integral DB ID to
use to find the data inside the DocumentData table.
responsible for storing and
retrieving data from .
responsible for storing and
retrieving data from .
A queue to batch up flush requests and ensure that we don't issue them more often than every .
Amount of time to wait between flushing writes to disk. 500ms means we can flush
writes to disk two times a second.
We use a pool to cache reads/writes that are less than 4k. Testing with Roslyn,
99% of all writes (48.5k out of 49.5k) are less than that size. So this helps
ensure that we can pool as much as possible, without caching excessively large
arrays (for example, Roslyn does write out nearly 50 chunks that are larger than
100k each).
The max amount of byte[]s we cache. This caps our cache at 4MB while allowing
us to massively speed up writing (by batching writes). Because we can write to
disk two times a second, a total of 8MB/s can be written to disk
using only our cache. Given that Roslyn itself only writes about 50MB to disk
after several minutes of analysis, this amount of bandwidth is more than sufficient.
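The batching behavior described above (many flush requests coalesced into at most one flush per interval) can be sketched in Python; the names are invented, and the real code is C# with its own async queue type:

```python
import threading
import time

class BatchingFlushQueue:
    """Sketch: coalesce flush requests so the underlying flush runs at most
    once per interval, no matter how many writes request it in between."""

    def __init__(self, flush, interval=0.5):
        self._flush = flush
        self._interval = interval
        self._lock = threading.Lock()
        self._pending = False

    def request_flush(self):
        with self._lock:
            if self._pending:
                return  # a flush is already scheduled; this request batches in
            self._pending = True
        timer = threading.Timer(self._interval, self._run)
        timer.daemon = True
        timer.start()

    def _run(self):
        with self._lock:
            self._pending = False
        self._flush()
```

With a 500ms interval, any burst of writes costs at most two disk flushes per second, which is where the 8MB/s bandwidth figure above comes from.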
Mapping from the workspace's ID for a project, to the ID we use in the DB for the project.
Kept locally so we don't have to hit the DB for the common case of trying to determine the
DB id for a project.
Given a project, and the name of a stream to read/write, gets the integral DB ID to
use to find the data inside the ProjectData table.
responsible for storing and
retrieving data from .
responsible for storing and
retrieving data from .
responsible for storing and
retrieving data from . Note that with the Solution
table there is no need for key->id translation. i.e. the key acts as the ID itself.
responsible for storing and
retrieving data from . Note that with the Solution
table there is no need for key->id translation. i.e. the key acts as the ID itself.
Inside the DB we have a table dedicated to storing strings that also provides a unique
integral ID per string. This allows us to store data keyed in a much more efficient
manner as we can use those IDs instead of duplicating strings all over the place. For
example, there may be many pieces of data associated with a file. We don't want to
key off the file path in all these places as that would cause a large amount of bloat.
Because the string table can map from arbitrary strings to unique IDs, it can also be
used to create IDs for compound objects. For example, given the IDs for the FilePath
and Name of a Project, we can get an ID that represents the project itself by just
creating a compound key of those two IDs. This ID can then be used in other compound
situations. For example, a Document's ID is created by compounding its Project's
ID, along with the IDs for the Document's FilePath and Name.
The format of the table is:
StringInfo
--------------------------------------------------------------------
| StringDataId (int, primary key, auto increment) | Data (varchar) |
--------------------------------------------------------------------
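The string-interning and compound-key scheme can be sketched against sqlite in Python (the helper functions are invented for illustration; only the StringInfo table layout comes from the description above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE StringInfo "
    "(StringDataId INTEGER PRIMARY KEY AUTOINCREMENT, Data TEXT UNIQUE)")

def string_id(value):
    # Return the unique integral id for a string, creating it on first use.
    conn.execute("INSERT OR IGNORE INTO StringInfo (Data) VALUES (?)", (value,))
    return conn.execute(
        "SELECT StringDataId FROM StringInfo WHERE Data = ?", (value,)).fetchone()[0]

def project_id(file_path, name):
    # A compound key: the ids of the parts, themselves interned as a string.
    return string_id(f"{string_id(file_path)}-{string_id(name)}")

def document_id(proj_path, proj_name, doc_path, doc_name):
    # Compound keys compose: a document id builds on its project's id.
    return string_id(
        f"{project_id(proj_path, proj_name)}-{string_id(doc_path)}-{string_id(doc_name)}")

pid1 = project_id("/src/App.csproj", "App")
pid2 = project_id("/src/App.csproj", "App")
assert pid1 == pid2  # stable across lookups
```

Every piece of data can then be keyed off these small integers instead of repeating full paths, which is the bloat-avoidance goal described above.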
Inside the DB we have a table for data corresponding to the . The
data is just a blob that is keyed by a string Id. Data with this ID can be retrieved
or overwritten.
The format of the table is:
SolutionData
----------------------------------------------------
| DataNameId (int) | Checksum (blob) | Data (blob) |
----------------------------------------------------
| Primary Key |
--------------------
Inside the DB we have a table for data that we want associated with a . The data is
keyed off of the path of the project and its name. That way different TFMs will have different keys.
The format of the table is:
ProjectData
------------------------------------------------------------------------------------------------
| ProjectPathId (int) | ProjectNameId (int) | DataNameId (int) | Checksum (blob) | Data (blob) |
------------------------------------------------------------------------------------------------
| Primary Key |
----------------------------------------------------------------
Inside the DB we have a table for data that we want associated with a . The data is
keyed off the project information, and the folder and name of the document itself. This allows the majority
of the key to be shared (project path/name, and folder name) with other documents, with only the document
name portion being distinct. Different TFM flavors will also share everything but the project name.
The format of the table is:
DocumentData
------------------------------------------------------------------------------------------------------------------------------------------------
| ProjectPathId (int) | ProjectNameId (int) | DocumentFolderId (int) | DocumentNameId (int) | DataNameId (int) | Checksum (blob) | Data (blob) |
------------------------------------------------------------------------------------------------------------------------------------------------
| Primary Key |
------------------------------------------------------------------------------------------------------------------
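A sketch of the composite-key behavior in Python's sqlite3 (column names are taken from the table layout above; the upsert semantics are illustrative, not a claim about the real C# code):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# The composite primary key mirrors the DocumentData layout described above.
conn.execute("""
    CREATE TABLE DocumentData (
        ProjectPathId INTEGER, ProjectNameId INTEGER,
        DocumentFolderId INTEGER, DocumentNameId INTEGER,
        DataNameId INTEGER, Checksum BLOB, Data BLOB,
        PRIMARY KEY (ProjectPathId, ProjectNameId,
                     DocumentFolderId, DocumentNameId, DataNameId)
    )""")

key = (1, 2, 3, 4, 5)
# Writing twice with the same composite key overwrites in place.
conn.execute("INSERT OR REPLACE INTO DocumentData VALUES (?, ?, ?, ?, ?, ?, ?)",
             key + (b"abc", b"v1"))
conn.execute("INSERT OR REPLACE INTO DocumentData VALUES (?, ?, ?, ?, ?, ?, ?)",
             key + (b"def", b"v2"))
rows = conn.execute("SELECT Checksum, Data FROM DocumentData").fetchall()
assert rows == [(b"def", b"v2")]  # one row per composite key
```

Documents in the same project and folder differ only in `DocumentNameId`, so most key components are shared, as described above.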
The roslyn simple name of the type to search for. For example
would have the name ImmutableArray
The arity of the type. For example would have arity
1.
The roslyn simple name of the type to search for. For example
would have the name ImmutableArray
The arity of the type. For example would have arity
1.
The roslyn simple name of the type to search for. For example
would have the name ImmutableArray
The arity of the type. For example would have arity
1
The names comprising the namespace being searched for. For example ["System", "Collections",
"Immutable"].
The names comprising the namespace being searched for. For example ["System", "Collections",
"Immutable"].
The names comprising the namespace being searched for. For example ["System", "Collections",
"Immutable"].
Searches for packages that contain a type with the provided name and arity. Note: Implementations are free to
return the results they feel best for the given data. Specifically, they can do exact or fuzzy matching on the
name. They can use or ignore the arity depending on their capabilities.
Implementations should return results in order from best to worst (from their perspective).
Searches for packages that contain an assembly with the provided name. Note: Implementations are free to return
the results they feel best for the given data. Specifically, they can do exact or fuzzy matching on the name.
Implementations should return results in order from best to worst (from their perspective).
Searches for reference assemblies that contain a type with the provided name and arity.
Note: Implementations are free to return the results they feel best for the
given data. Specifically, they can do exact or fuzzy matching on the name.
They can use or ignore the arity depending on their capabilities.
Implementations should return results in order from best to worst (from their
perspective).
Service that allows you to query the SymbolSearch database and which keeps
the database up to date.
Interface to allow host (VS) to inform the OOP service to start incrementally analyzing and
reporting results back to the host.
Determines locations of 'todo' comments within a particular file. The specific 'todo' comment forms (e.g.
'TODO', 'UNDONE', etc.) are provided through .
Serialization type used to pass information to/from OOP and VS.
Serialization type used to pass information to/from OOP and VS.
Description of a TODO comment type to find in a user's comments.
Description of a TODO comment type to find in a user's comments.
Adds an execution time telemetry event representing
only if block duration meets or exceeds milliseconds.
Event data to be sent
Optional parameter used to determine whether to send the telemetry event (in milliseconds)
Adds a telemetry event with values obtained from context message
Returns an for logging telemetry.
FunctionId representing the telemetry operation
Returns an aggregating for logging histogram based telemetry.
FunctionId representing the telemetry operation
Optional values indicating bucket boundaries in milliseconds. If not specified,
all aggregating events created will use a default configuration
Returns an aggregating for logging counter telemetry.
FunctionId representing the telemetry operation
Flushes all telemetry logs
Provides access to the telemetry service to workspace services.
Abstract away the actual implementation of the telemetry service (e.g. Microsoft.VisualStudio.Telemetry).
True if a telemetry session has started.
True if the active session belongs to a Microsoft internal user.
Serializes the current telemetry settings. Returns if the session hasn't started.
Adds a used to log unexpected exceptions.
Removes a used to log unexpected exceptions.
Feature name used in telemetry.
Provides access to posting telemetry events or adding information
to aggregated telemetry events. Posts pending telemetry at 30
minute intervals.
Posts a telemetry event representing the operation with context message
Posts a telemetry event representing the operation
only if the block duration meets or exceeds milliseconds.
This event will contain properties from and the actual execution time.
Properties to be set on the telemetry event
Optional parameter used to determine whether to send the telemetry event
Adds information to an aggregated telemetry event representing the operation
with the specified name and value.
Adds block execution time to an aggregated telemetry event representing the operation
with metric only if the block duration meets or exceeds milliseconds.
Optional parameter used to determine whether to send the telemetry event
Returns non-aggregating telemetry log.
Returns aggregating telemetry log.
Temporarily stores text and streams in memory mapped files.
The maximum size in bytes of a single storage unit in a memory mapped file which is shared with other storage
units.
The value of 256k reduced the number of files dumped to separate memory mapped files by 60% compared to
the next lower power-of-2 size for Roslyn.sln itself.
The size in bytes of a memory mapped file created to store multiple temporary objects.
This value (8MB) creates roughly 35 memory mapped files (around 300MB) to store the contents of all of
Roslyn.sln in a snapshot. This keeps the data safe, so that we can drop it from memory when not needed, but
reconstitute the contents we originally had in the snapshot in case the original files change on disk.
The synchronization object for accessing the memory mapped file related fields (indicated in the remarks
of each field).
PERF DEV NOTE: A concurrent (but complex) implementation of this type with identical semantics is
available in source control history. The use of exclusive locks was not causing any measurable
performance overhead even on 28-thread machines at the time this was written.
The most recent memory mapped file for creating multiple storage units. It will be used via bump-pointer
allocation until space is no longer available in it. Access should be synchronized on
The name of the current memory mapped file for multiple storage units. Access should be synchronized on
The total size of the current memory mapped file for multiple storage units. Access should be
synchronized on
The offset into the current memory mapped file where the next storage unit can be held. Access should be
synchronized on .
Allocate shared storage of a specified size.
"Small" requests are fulfilled from oversized memory mapped files which support several individual
storage units. Larger requests are allocated in their own memory mapped files.
The size of the shared storage block to allocate.
A describing the allocated block.
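The small/large split and bump-pointer allocation can be sketched in Python; the constants mirror the 256k/8MB values described above, but the `Allocator` API is invented for illustration:

```python
import mmap

CHUNK = 8 * 1024 * 1024  # size of each shared memory mapped file (8MB)
SMALL = 256 * 1024       # requests up to this size share a file (256k)

class Allocator:
    """Bump-pointer sketch: small requests are carved out of a shared
    anonymous mapping; larger requests get a dedicated mapping."""

    def __init__(self):
        self._shared = None  # current shared mapping
        self._offset = 0     # next free offset within it

    def allocate(self, size):
        if size > SMALL:
            # Large request: its own memory mapped file.
            return (mmap.mmap(-1, size), 0, size)
        if self._shared is None or self._offset + size > CHUNK:
            # No room left: start a fresh shared file and reset the pointer.
            self._shared = mmap.mmap(-1, CHUNK)
            self._offset = 0
        block = (self._shared, self._offset, size)  # (file, offset, size)
        self._offset += size
        return block
```

The returned `(file, offset, size)` triple plays the role of the handle describing the allocated block; freed space is never reclaimed within a chunk, which is what makes bump-pointer allocation so cheap.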
Our own abstraction on top of a memory mapped file so that we can have shared views over mmf files.
Otherwise, each view has a minimum size of 64K due to a requirement imposed by Windows.
Most of our views will have a short lifetime, but there are cases where a view might live a bit longer, such as
a metadata dll shadow copy. A shared view will help those cases.
This class and its nested types have familiar APIs and predictable behavior when used in other code, but
are non-trivial to work on.
Our own abstraction on top of a memory mapped file so that we can have shared views over mmf files.
Otherwise, each view has a minimum size of 64K due to a requirement imposed by Windows.
Most of our views will have a short lifetime, but there are cases where a view might live a bit longer, such as
a metadata dll shadow copy. A shared view will help those cases.
This class and its nested types have familiar APIs and predictable behavior when used in other code, but
are non-trivial to work on.
The memory mapped file.
A weak reference to a read-only view for the memory mapped file.
This holds a weak counted reference to current , which allows
additional accessors for the same address space to be obtained up until the point when no external code is
using it. When the memory is no longer being used by any
objects, the view of the memory mapped file is unmapped, making the process address space it previously
claimed available for other purposes. If/when it is needed again, a new view is created.
This view is read-only, so it is only used by .
The name of the memory mapped file. Non-null on systems that support named memory mapped files, null
otherwise.
The offset into the memory mapped file of the region described by the current
.
The size of the region of the memory mapped file described by the current
.
Caller is responsible for disposing the returned stream.
Multiple calls of this will not increase VM.
Caller is responsible for disposing the returned stream.
Multiple calls of this will increase VM.
Run a function which may fail with an if not enough memory is available to satisfy
the request. In this case, a full compacting GC pass is forced and the function is attempted again.
and will use a native memory map,
which can't trigger a GC. In this case, we'd otherwise crash with OOM, so we don't care about creating a UI
delay with a full forced compacting GC. If it crashes the second try, it means we're legitimately out of
resources.
The type of argument to pass to the callback.
The type returned by the function.
The function to execute.
The argument to pass to the function.
The value returned by .
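The retry-once-after-GC pattern can be sketched in Python, with `gc.collect()` standing in for a full compacting GC pass (the function name is invented; the real helper is C#):

```python
import gc

def run_with_compacting_gc_fallback(func, arg):
    """Sketch: if the operation fails for lack of memory, force a full
    collection and try exactly once more. A second failure propagates,
    meaning we are legitimately out of resources."""
    try:
        return func(arg)
    except MemoryError:
        gc.collect()      # analogous to a full forced compacting GC pass
        return func(arg)  # second attempt; failure bubbles to the caller
```

The single retry is deliberate: forcing a blocking collection is expensive, so it is only worth paying for when the alternative is crashing with OOM.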
Workspace service for cache implementations.
May be raised by a Workspace host when available memory is getting low in order to request
that caches be flushed.
Extensible document properties specified via a document service.
The LSP client name that should get the diagnostics produced by this document; any other source
will not show these diagnostics. For example, razor uses this to exclude diagnostics from the error list
so that they can handle the final display.
If null, the diagnostics do not have this special handling.
Excerpts some part of
Returns of the given and
The result might not be an exact copy of the given source, or may contain more than the given span.
This mode shows intention, not actual behavior. It is up to the implementation how to interpret the intention.
Result of excerpt
Result of excerpt
excerpt content
span on that given got mapped to
classification information on the
this excerpt is from
should be same document in
span on this excerpt is from
should be same text span in
TODO: Merge into .
Used by Razor via IVT.
document version of
indicates whether this document supports diagnostics or not
Empty interface just to mark document services.
Gets a document specific service provided by the host identified by the service type.
If the host does not provide the service, this method returns null.
Maps spans in a document to other spans, even in other documents.
This will be used by various features, if provided, to convert a span in one document to other spans.
For example, it is used to show spans users expect in a Razor file rather than spans in an
auto-generated file that is an implementation detail, or to navigate to the right place rather
than the generated file, etc.
Whether this span mapping service can handle mapping import directives added to a document.
Maps spans in the document to more appropriate locations.
In the current design, this can NOT map a span to a span that is not backed by a file.
For example, Roslyn allows a document that is not backed by a file, and the current design doesn't allow
such a document to be returned from this API.
For example, mapping a span on a Razor secondary buffer document in a Roslyn solution to a span on a Razor cshtml file is possible, but
mapping a span on a Razor cshtml file to a span on a secondary buffer document is not, since the secondary buffer document is not backed by a file.
Document given spans belong to
Spans in the document
Cancellation token
Return mapped span. order of result should be same as the given span
Result of span mapping
Path to mapped file
LinePosition representation of the Span
Mapped span
MEF metadata class used to find exports declared for a specific .
MEF metadata class used to find exports declared for a specific .
Helper type to track whether has been initialized.
Currently, this helper only supports services whose lifetime is the same as the host's (e.g. VS).
Helper type to track whether has been initialized.
Currently, this helper only supports services whose lifetime is the same as the host's (e.g. VS).
MEF export attribute for
one of values from indicating which service this event listener is for
indicate which workspace kind this event listener is for
provide a way for features to lazily subscribe to a service event for a particular workspace
see for supported services
Ensure is called for the workspace
list of well known types
Per-language services provided by the host environment.
Language services which implement are considered ownable, in which case the
owner is responsible for disposing of owned instances when they are no longer in use. The ownership rules are
described in detail for . Instances of have
the same ownership rules as , and instances of
have the same ownership rules as
.
The that originated this language service.
The name of the language
Immutable snapshot of the host services. Preferable to use instead of this when possible.
Gets a language specific service provided by the host identified by the service type.
If the host does not provide the service, this method returns null.
Gets a language specific service provided by the host identified by the service type.
If the host does not provide the service, this method throws .
A factory for creating compilations instances.
Services provided by the host environment.
Creates a new workspace service.
Per workspace services provided by the host environment.
Workspace services which implement are considered ownable, in which case the
owner is responsible for disposing of owned instances when they are no longer in use. When
or instances are provided directly to the
, the owner of the instances is the type or container (e.g. a MEF export
provider) which created the instances. For the specific case of ownable workspace services created by a factory
(i.e. instances returned by ), the
is considered the owner of the resulting instance and is expected to be disposed during the call to
.
Summary of lifetime rules
-
instance constructed externally (e.g. MEF): Owned by the
external source, and will not be automatically disposed when is disposed.
-
instance constructed externally (e.g. MEF): Owned by
the external source, and will not be automatically disposed when is disposed.
-
instance constructed by within
the context of : Owned by , and
will be automatically disposed when is disposed.
The host services this workspace services originated from.
The workspace corresponding to this workspace services instantiation
Gets a workspace specific service provided by the host identified by the service type.
If the host does not provide the service, this method returns null.
Gets a workspace specific service provided by the host identified by the service type.
If the host does not provide the service, this method throws .
The host does not provide the service.
Obsolete. Roslyn no longer supports a mechanism to perform arbitrary persistence of data. If such functionality
is needed, consumers are responsible for providing it themselves with whatever semantics are needed.
Obsolete. Roslyn no longer supports a mechanism to store arbitrary data in-memory. If such functionality
is needed, consumers are responsible for providing it themselves with whatever semantics are needed.
A factory that constructs .
A list of language names for supported language services.
Returns true if the language is supported.
Gets the for the language name.
Thrown if the language isn't supported.
Finds all language services of the corresponding type across all supported languages that match the filter criteria.
Allows the host to provide fallback editorconfig options for a language loaded into the workspace.
Empty interface just to mark language services.
Empty interface just to mark workspace services.
Per language services provided by the host environment.
Use this attribute to declare a implementation for MEF
The file extensions this can handle, such as cshtml.
Matching will be done by .
Use this attribute to declare a implementation for inclusion in a MEF-based workspace.
Declares a implementation for inclusion in a MEF-based workspace.
The type that will be used to retrieve the service from a .
The language that the service is targeted for; , etc.
The layer that the service is specified for; , etc.
Use this attribute to declare a implementation for inclusion in a MEF-based workspace.
Declares a implementation for inclusion in a MEF-based workspace.
The type that will be used to retrieve the service from a .
The language that the service is targeted for; , etc.
The layer that the service is specified for; , etc.
The assembly qualified name of the service's type.
The language that the service is targeted for. Specify a value from , or another language name.
The layer that the service is specified for. Specify a value from .
s that the service is specified for.
If non-empty, the service is only exported for the listed workspace kinds and is not applied,
unless is , in which case the export overrides all other exports.
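A minimal sketch of declaring such an export via the MEF attribute. `IMyService`/`MyService` are hypothetical placeholders:

```csharp
using System.Composition;
using Microsoft.CodeAnalysis.Host;
using Microsoft.CodeAnalysis.Host.Mef;

// Hypothetical service interface and implementation.
interface IMyService : IWorkspaceService { }

// Exported at the default layer; a host-specific layer could override it.
[ExportWorkspaceService(typeof(IMyService), ServiceLayer.Default), Shared]
internal sealed class MyService : IMyService
{
    [ImportingConstructor]
    public MyService() { }
}
```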
Use this attribute to declare a implementation for inclusion in a MEF-based workspace.
Declares a implementation for inclusion in a MEF-based workspace.
The type that will be used to retrieve the service from a .
The language that the service is targeted for; , etc.
The layer that the service is specified for; , etc.
Use this attribute to declare a implementation for inclusion in a MEF-based workspace.
Declares a implementation for inclusion in a MEF-based workspace.
The type that will be used to retrieve the service from a .
The language that the service is targeted for; , etc.
The layer that the service is specified for; , etc.
The assembly qualified name of the service's type.
The language that the service is targeted for. Specify a value from , or another language name.
The layer that the service is specified for. Specify a value from .
s that the service is specified for.
If non-empty, the service is only exported for the listed workspace kinds and is not applied,
unless is , in which case the export overrides all other exports.
Use this attribute to declare a implementation for inclusion in a MEF-based workspace.
Declares a implementation for inclusion in a MEF-based workspace.
The type that will be used to retrieve the service from a .
The layer that the service is specified for; , etc.
Use this attribute to declare a implementation for inclusion in a MEF-based workspace.
Declares a implementation for inclusion in a MEF-based workspace.
The type that will be used to retrieve the service from a .
The layer that the service is specified for; , etc.
The assembly qualified name of the service's type.
The layer that the service is specified for. Specify a value from .
s that the service is specified for.
If non-empty, the service is only exported for the listed workspace kinds and is not applied,
unless is , in which case the export overrides all other exports.
Use this attribute to declare a implementation for inclusion in a MEF-based workspace.
Declares a implementation for inclusion in a MEF-based workspace.
The type that will be used to retrieve the service from a .
The layer or workspace kind that the service is specified for; , etc.
Use this attribute to declare a implementation for inclusion in a MEF-based workspace.
Declares a implementation for inclusion in a MEF-based workspace.
The type that will be used to retrieve the service from a .
The layer or workspace kind that the service is specified for; , etc.
The assembly qualified name of the service's type.
The layer that the service is specified for. Specify a value from .
s that the service is specified for.
If non-empty, the service is only exported for the listed workspace kinds and is not applied,
unless is , in which case the export overrides all other exports.
A factory that creates instances of a specific .
Implement a when you want to provide instances that use other services.
Creates a new instance.
The that can be used to access other services.
A factory that creates instances of a specific .
Implement a when you want to provide instances that use other services.
Creates a new instance.
Returns null if the service is not applicable to the given workspace.
The that can be used to access other services.
This delegate allows test code to override the behavior of .
Injects replacement behavior for the method.
The layer of an exported service.
If there are multiple definitions of a service, the is used to determine which is used.
Service layer that overrides , and .
Service layer that overrides , and .
Service layer that overrides and .
Service layer that overrides .
The base service layer.
MEF metadata class used to find exports declared for a specific file extensions.
This interface is provided purely to enable some shared logic that handles multiple kinds of
metadata that share the Language property. It should not be used to find exports via MEF,
use LanguageMetadata instead.
This interface is provided purely to enable some shared logic that handles multiple kinds of
metadata that share the Languages property. It should not be used to find exports via MEF,
use LanguageMetadata instead.
MEF metadata class used to find exports declared for a specific language.
MEF metadata class used to find exports declared for a specific language.
MEF metadata class used for finding and exports.
MEF metadata class used for finding and exports.
Layers in the priority order. services override services, etc.
MEF metadata class used for finding and exports.
MEF metadata class used for finding and exports.
Abstract implementation of an analyzer assembly loader that can be used by VS/VSCode to provide a with an appropriate path.
Provides a way to map from an assembly name to the actual path of the .NET Framework
assembly with that name in the context of a specified project. For example, if the
assembly name is "System.Data" then a project targeting .NET 2.0 would resolve this
to a different path than a project targeting .NET 4.5.
Returns null if the assembly name could not be resolved.
An optional type name for a type that must
exist in the assembly.
The project context to search within.
The name of the assembly to try to resolve.
A cache for metadata references.
A cache for metadata references.
A collection of references to the same underlying metadata, each with different properties.
A collection of references to the same underlying metadata, each with different properties.
The solution this is a storage instance for.
if the data we have for the solution with the given has the
provided .
if the data we have for the given with the given has the provided .
if the data we have for the given with the given has the provided .
Reads the stream for the solution with the given . If
is provided, the persisted checksum must match it. If there is no such stream with that name, or the
checksums do not match, then will be returned.
Reads the stream for the with the given . If
is provided, the persisted checksum must match it. If there is no such stream with that name, or the
checksums do not match, then will be returned.
Reads the stream for the with the given . If
is provided, the persisted checksum must match it. If there is no such stream with that name, or the
checksums do not match, then will be returned.
Writes the stream for the solution with the given . An optional can be provided to store along with the data. This can be used along with ReadStreamAsync
with future reads to ensure the data is only read back if it matches that checksum.
Returns if the data was successfully persisted to the storage subsystem. Subsequent
calls to read the same keys should succeed if called within the same session.
Writes the stream for the with the given . An optional
can be provided to store along with the data. This can be used along with
ReadStreamAsync with future reads to ensure the data is only read back if it matches that checksum.
Returns if the data was successfully persisted to the storage subsystem. Subsequent
calls to read the same keys should succeed if called within the same session.
Writes the stream for the with the given . An optional
can be provided to store along with the data. This can be used along with
ReadStreamAsync with future reads to ensure the data is only read back if it matches that checksum.
Returns if the data was successfully persisted to the storage subsystem. Subsequent
calls to read the same keys should succeed if called within the same session.
Returns if the data was successfully persisted to the storage subsystem. Subsequent
calls to read the same keys should succeed if called within the same session.
Returns if the data was successfully persisted to the storage subsystem. Subsequent
calls to read the same keys should succeed if called within the same session.
Instances of support both synchronous and asynchronous disposal. Asynchronous
disposal should always be preferred as the implementation of synchronous disposal may end up blocking the caller
on async work.
Returns if the data was successfully persisted to the storage subsystem. Subsequent
calls to read the same keys should succeed if called within the same session.
Returns if the data was successfully persisted to the storage subsystem. Subsequent
calls to read the same keys should succeed if called within the same session.
Returns if the data was successfully persisted to the storage subsystem. Subsequent
calls to read the same keys should succeed if called within the same session.
Configuration of the intended to be used to override behavior in tests.
Indicates that the client expects the DB to succeed at all work and that it should not ever gracefully fall over.
Should not be set in normal host environments, where it is completely reasonable for things to fail
(for example, if a client asks for a key that hasn't been stored yet).
Used to ensure that the path components we generate do not contain any characters that might be invalid in a
path. For example, Base64 encoding will use / which is something that we definitely do not want
errantly added to a path.
Obsolete. Roslyn no longer supports a mechanism to perform arbitrary persistence of data. If such functionality
is needed, consumers are responsible for providing it themselves with whatever semantics are needed.
Per solution services provided by the host environment.
Note: do not expose publicly. exposes a which we want to avoid doing from our immutable snapshots.
Gets the for the language name.
Thrown if the language isn't supported.
Provides info on the given file.
This will be used to provide dynamic content, such as content generated from cshtml, to the workspace.
We acquire this from , exposed by external components such as Razor for cshtml.
Provides info on the given file.
This will be used to provide dynamic content, such as content generated from cshtml, to the workspace.
We acquire this from , exposed by external components such as Razor for cshtml.
The path to the generated file. In the future, we will use this to get the right options from .editorconfig.
Returns the for this file.
Returns the used to load content for the dynamic file.
True if the source code contained in the document is only used at design time (e.g., for completion),
but is not passed to the compiler when the containing project is built, e.g., a Razor view.
return for the content it provided
Provider for the
The implementer of this service should be free-threaded, meaning it can't switch to the UI thread underneath;
otherwise, we can deadlock if we wait for the dynamic file info from the UI thread.
Returns the for the given context.
this file belongs to
Full path to the project file (e.g., csproj).
Full path to the non-source file (e.g., cshtml).
Null if this provider can't handle the given file.
Lets the provider know a certain file has been removed.
this file belongs to
Full path to the project file (e.g., csproj).
Full path to the non-source file (e.g., cshtml).
Indicates that the content of a file has been updated. The event argument "string" should be the same as the "filepath" given to .
Provides workspace status.
This is a work-in-progress interface, subject to change as we work on the prototype.
It can be completely removed at the end, or new APIs can be added and removed as the prototype goes on.
No one except those in the prototype group should use this interface.
Tracking issue: https://github.com/dotnet/roslyn/issues/34415
Indicates that the status has changed.
Waits until the workspace is fully loaded.
Unfortunately, some hosts, such as VS, use services (e.g., IVsOperationProgressStatusService) that require the UI thread to let the project system proceed to the next stages.
What that means is that this method should only be used with either await or JTF.Run; it should never be used with Task.Wait, otherwise it can
deadlock.
Indicates whether the workspace is fully loaded.
Unfortunately, some hosts, such as VS, use services (e.g., IVsOperationProgressStatusService) that require the UI
thread to let the project system proceed to the next stages. What that means is that this method should only be
used with either await or JTF.Run; it should never be used with Task.Wait, otherwise it can deadlock.
Factory service for creating syntax trees.
Returns true if the two options differ only by preprocessor directives; this allows us to reuse trees
if they don't have preprocessor directives in them.
A factory that creates either sequential or parallel task schedulers.
Workspace service that provides instance.
API to allow a client to write data to memory-mapped-file storage. That data can be read back in within the same
process using a handle returned from the writing call. The data can optionally be read back in from a different
process, using the information contained within the handle's Identifier (see ), but only on systems that support named memory mapped files. Currently, this
is any .NET runtime on Windows and Mono on Unix systems. This is not supported on .NET Core on Unix systems (tracked here:
https://github.com/dotnet/runtime/issues/30878). This is not a problem in practice as cross-process sharing is only
needed by the VS host, which is Windows-only.
Write the provided to a new memory-mapped-file. Returns a handle to the data that can
be used to identify the data across processes, allowing it to be read back in from any process.
This type is primarily used to allow dumping metadata to disk. This then allows them to be read in by mapping
their data into types like . It also allows them to be read in by our server
process, without having to transmit the data over the wire.
Note: The stream provided must support . The stream will also be reset to
0 within this method. The caller does not need to reset the stream
itself.
Write the provided to a new memory-mapped-file. Returns a handle to the data that can
be used to identify the data across processes, allowing it to be read back in from any process.
This type is primarily used to allow dumping source texts to disk. This then allows them to be read in by
mapping their data into types like . It also allows them
to be read in by our server process, without having to transmit the data over the wire.
Represents a handle to data stored in temporary storage (generally a memory mapped file). As long as this handle is
alive, the data should remain in storage and can be read from any process using the information provided in . Use to write the data to temporary storage and get a handle to it. Use to read the data back in any process.
Reads the data indicated to by this handle into a stream. This stream can be created in a different process
than the one that wrote the data originally.
Legacy implementation of obsolete public API .
Identifier for a stream of data placed in a segment of a memory mapped file. Can be used to identify that segment
across processes (where supported), allowing for efficient sharing of data.
The name of the segment in the temporary storage. on platforms that don't
support cross process sharing of named memory mapped files.
Identifier for a stream of data placed in a segment of a memory mapped file. Can be used to identify that segment
across processes (where supported), allowing for efficient sharing of data.
The name of the segment in the temporary storage. on platforms that don't
support cross process sharing of named memory mapped files.
The name of the segment in the temporary storage. on platforms that don't
support cross process sharing of named memory mapped files.
A factory for creating instances.
Creates from a stream.
The stream to read the text from. Must be readable and seekable. The text is read from the start of the stream.
Specifies an encoding to be used if the actual encoding can't be determined from the stream content (the stream doesn't start with a Byte Order Mark).
If not specified, auto-detect heuristics are used to determine the encoding. If these heuristics fail, the encoding is assumed to be the system encoding.
Note that if the stream starts with a Byte Order Mark, the value of is ignored.
Algorithm to calculate content checksum.
Cancellation token.
The stream content can't be decoded using the specified , or
is null and the stream appears to be a binary file.
An IO error occurred while reading from the stream.
Creates from a reader with given .
The to read the text from.
Specifies an encoding for the SourceText.
It can be null, but if null is given, the checksum can't be calculated.
Algorithm to calculate content checksum.
Cancellation token.
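A minimal sketch of the stream-based factory pattern described above, using the public SourceText.From overload; the file path is a placeholder:

```csharp
using System.IO;
using System.Text;
using Microsoft.CodeAnalysis.Text;

static class TextReading
{
    static SourceText ReadText(string path)
    {
        using var stream = File.OpenRead(path);

        // UTF8 is the fallback used only if the encoding can't be detected
        // from the stream (no byte order mark); the checksum algorithm
        // determines what SourceText.GetChecksum() computes.
        return SourceText.From(stream, Encoding.UTF8, SourceHashAlgorithm.Sha256);
    }
}
```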
Available in workspaces that accept changes in solution level analyzers.
Options that affect behavior of workspace core APIs (, , , , etc.) to which it would be impractical to flow these options
explicitly. The options are instead provided by . The remote instance of
this service is initialized based on the in-proc values (which themselves are loaded from global options) when we
establish a connection from devenv to the ServiceHub process. If another process connects to our ServiceHub process before
that, the remote instance provides a predefined set of options that can later be updated
when devenv connects to the ServiceHub process.
Options that affect behavior of workspace core APIs (, , , , etc.) to which it would be impractical to flow these options
explicitly. The options are instead provided by . The remote instance of
this service is initialized based on the in-proc values (which themselves are loaded from global options) when we
establish a connection from devenv to the ServiceHub process. If another process connects to our ServiceHub process before
that, the remote instance provides a predefined set of options that can later be updated
when devenv connects to the ServiceHub process.
These values are such that the correctness of remote services is not affected if these options are changed from defaults
to non-defaults while the services have already been executing.
Source generators should re-run after any change to a project.
Source generators should re-run only when certain changes happen. The set of changes is host-dependent, but
generally includes things like "builds" or "file saves": larger events (not just text changes) which indicate
that it's a more reasonable time to run generators.
Gets extended host language services, which includes language services from .
The documentation provider used to lookup xml docs for any metadata reference we pass out. They'll
all get the same xml doc comment provider (as different references to the same compilation don't
see the xml docs any differently). This provider does root a Compilation around. However, this should
not be an issue in practice as the compilation we are rooting is a clone of the actual compilation of the
project, and not the compilation itself. This clone doesn't share any symbols/semantics with the main
compilation, and it can dump syntax trees whenever necessary. What it does store is the compact
decl-table which is safe and cheap to hold onto long term. When some downstream consumer of this
metadata-reference then needs to get xml-doc comments, it will resolve a doc-comment-id against this
decl-only-compilation. Resolution is very cheap, only causing the necessary symbols referenced directly
in the ID to be created. As downstream consumers are only likely to resolve a small handful of these
symbols in practice, this should not be expensive to hold onto. Importantly, semantic models and
complex method binding/caching should never really happen with this compilation.
The documentation provider used to lookup xml docs for any metadata reference we pass out. They'll
all get the same xml doc comment provider (as different references to the same compilation don't
see the xml docs any differently). This provider does root a Compilation around. However, this should
not be an issue in practice as the compilation we are rooting is a clone of the actual compilation of the
project, and not the compilation itself. This clone doesn't share any symbols/semantics with the main
compilation, and it can dump syntax trees whenever necessary. What it does store is the compact
decl-table which is safe and cheap to hold onto long term. When some downstream consumer of this
metadata-reference then needs to get xml-doc comments, it will resolve a doc-comment-id against this
decl-only-compilation. Resolution is very cheap, only causing the necessary symbols referenced directly
in the ID to be created. As downstream consumers are only likely to resolve a small handful of these
symbols in practice, this should not be expensive to hold onto. Importantly, semantic models and
complex method binding/caching should never really happen with this compilation.
A class used to provide XML documentation to the compiler for members from metadata from an XML document source.
Gets the source stream for the XML document.
The cancellation token.
Creates an from bytes representing XML documentation data.
The XML document bytes.
An .
Creates an from an XML documentation file.
The path to the XML file.
An .
A trivial XmlDocumentationProvider which never returns documentation.
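A minimal sketch of creating such a provider from an XML documentation file and attaching it to a metadata reference; the paths are placeholders:

```csharp
using Microsoft.CodeAnalysis;

static class DocReferences
{
    static MetadataReference CreateReference(string dllPath, string xmlPath)
    {
        // The provider lazily reads the XML doc file and serves doc comments
        // for symbols resolved from the referenced assembly.
        var docs = XmlDocumentationProvider.CreateFromFile(xmlPath);
        return MetadataReference.CreateFromFile(dllPath, documentation: docs);
    }
}
```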
Returns true if a type name matches a document name. We use
case-insensitive matching to determine this match so that files
"a.cs" and "A.cs" both match a class called "A".
Standard way to get the display name from a SyntaxNode. If the display
name is null, returns false. Otherwise uses
Gets a type name based on a document name. Returns null
if the document has no name or the document has invalid characters in the name
such that would throw.
A workspace that allows full manipulation of projects and documents,
but does not persist changes.
A workspace that allows full manipulation of projects and documents,
but does not persist changes.
Returns true, signifying that you can call the open and close document APIs to add the document into the open document list.
Clears all projects and documents from the workspace.
Adds an entire solution to the workspace, replacing any existing solution.
Adds a project to the workspace. All previous projects remain intact.
Adds a project to the workspace. All previous projects remain intact.
Adds multiple projects to the workspace at once. All existing projects remain intact.
Adds a document to the workspace.
Adds a document to the workspace.
Puts the specified document into the open state.
Puts the specified document into the closed state.
Puts the specified additional document into the open state.
Puts the specified additional document into the closed state.
Puts the specified analyzer config document into the open state.
Puts the specified analyzer config document into the closed state.
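A minimal sketch of the workspace described above (AdhocWorkspace); the project and file names are placeholders, and nothing is persisted to disk:

```csharp
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.Text;

var workspace = new AdhocWorkspace();

// Adds a project; all previously added projects remain intact.
var project = workspace.AddProject("Demo", LanguageNames.CSharp);

// Adds a document to the project.
var document = workspace.AddDocument(project.Id, "Program.cs",
    SourceText.From("class C { }"));

workspace.OpenDocument(document.Id);   // moves it into the open state
workspace.CloseDocument(document.Id);  // and back to the closed state
```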
Create a structure initialized from a compiler's command line arguments.
Create a structure initialized with data from a compiler command line.
Retrieves information about what documents are currently active or visible in the host workspace. Note: this
information is fundamentally racy (it can change directly after it is requested), and on different threads than the
thread that asks for it. As such, this information must only be used to provide a hint towards how a
feature should go about its work, it must not impact the final results that a feature produces. For example, a
feature is allowed to use this information to decide what order to process documents in, to try to get more relevant
results to a client more quickly. However, it is not allowed to use this information to decide what results to
return altogether. Hosts are free to implement this service to do nothing at all, always returning empty/default
values for the members within. As per the above, this should never affect correctness, but it may impede a
feature's ability to provide results in as timely a manner as possible for a client.
Get the of the active document. May be null if there is no active document, the
active document is not in the workspace, or if this functionality is not supported by a particular host.
Get a read only collection of the s of all the visible documents in the workspace. May
be empty if there are no visible documents, or if this functionality is not supported by a particular host.
Fired when the active document changes. A host is not required to support this event, even if it implements
.
Gets the active the user is currently working in. May be null if
there is no active document or the active document is not in this .
Get a read only collection of all the unique visible documents in the workspace that are
contained within .
Can be acquired from , with .
Basic no-op implementation on .NET Framework. We can't actually isolate anything on .NET Framework, so we just return the
assembly references as is.
Given a set of analyzer references, attempts to return a new set that is in an isolated AssemblyLoadContext so
that the analyzers and generators from it can be safely loaded side-by-side with prior versions of the same
references that may already be loaded.
Given a checksum for a set of analyzer references, fetches the existing ALC-isolated set of them if already
present in this process. Otherwise, this fetches the raw serialized analyzer references from the host side,
then creates and caches an isolated set on the OOP side to hold onto them, passing out that isolated set of
references to be used by the caller (normally to be stored in a solution snapshot).
A file change context used to watch metadata references. This is lazy to avoid creating this immediately during
our LSP process startup, when we don't yet know the LSP client's capabilities.
File watching tokens from that are watching metadata references. These
are only created once we are actually applying a batch because we don't determine until the batch is applied if
the file reference will actually be a file reference or it'll be a converted project reference.
Stores the caller for a previous disposal of a reference produced by this class, to track down a double-dispose
issue.
This can be removed once https://devdiv.visualstudio.com/DevDiv/_workitems/edit/1843611 is fixed.
Starts watching a particular for changes to the file. If this is already being
watched , the reference count will be incremented. This is *not* safe to attempt to call multiple times for the
same project and reference (e.g. in applying workspace updates)
Decrements the reference count for the given . When the reference count reaches
0, the file watcher will be stopped. This is *not* safe to attempt to call multiple times for the same project
and reference (e.g. in applying workspace updates)
Gives a hint to the that we should watch a top-level directory for all changes in addition
to any files called by .
This is largely intended as an optimization; consumers should still call
for files they want to watch. This allows the caller to give a hint that it is expected that most of the files being
watched are under this directory, and so it's more efficient just to watch _all_ of the changes in that directory
rather than creating and tracking a bunch of file watcher state for each file separately. A good example would be
just creating a single directory watch on the root of a project for source file changes: rather than creating a file watcher
for each individual file, we can just watch the entire directory and that's it.
If non-null, only watch the directory for changes to a specific extension. String always starts with a period.
A context that is watching one or more files.
Raised when a file has been changed. This may be a file watched explicitly by or it could be any
file in the directory if the was watching a directory.
Starts watching a file but doesn't wait for the file watcher to be registered with the operating system. Good if you know
you'll need a file watched (eventually) but it's not worth blocking yet.
When a FileChangeWatcher already has a watch on a directory, a request to watch a specific file is a no-op. In that case, we return this token,
which when disposed also does nothing.
Represents an additional file passed down to analyzers.
Aggregate analyzer config options for a specific path.
These options do not fall back.
A checksum of data that can be used later to see whether two pieces of data are the same or not
without actually comparing the data itself.
A checksum of data that can be used later to see whether two pieces of data are the same or not
without actually comparing the data itself.
The intended size of the structure.
Represents a default/null/invalid Checksum, equivalent to default(Checksum). This value contains
all zeros, which is considered infinitesimally unlikely to ever happen from hashing data (including when
hashing null/empty/zero data inputs).
Create a Checksum from the given byte array. If the byte array is bigger than , it will be
truncated to that size.
Create a Checksum from the given byte array. If the byte array is bigger than , it will be
truncated to that size.
Paths of files produced by the compilation.
Full path to the assembly or module produced by the compilation, or if unknown.
Absolute path to the root directory of source generated files, or null if it is not known.
True if the project has an absolute generated source file output path.
Must be true for any workspace that supports EnC. If false, the compiler and IDE wouldn't agree on the file paths of source-generated files,
which might cause different metadata to be emitted for file-scoped classes between compilation and EnC.
Absolute path of a directory used to produce absolute file paths of source generated files.
This value source keeps a strong reference to a value.
This value source keeps a strong reference to a value.
Not built from a text loader.
for regular C#/VB files.
Represents a source code document that is part of a project.
It provides access to the source text, parsed syntax tree and the corresponding semantic model.
A cached reference to the .
A cached reference to the .
A cached task that can be returned once the tree has already been created. This is only set if returns true,
so the inner value can be non-null.
The kind of source code this document contains.
True if the info of the document changed (name, folders, file path; not the content).
Get the current syntax tree for the document if the text is already loaded and the tree is already parsed.
In almost all cases, you should call to fetch the tree, which will parse the tree
if it's not already parsed.
Get the current syntax tree version for the document if the text is already loaded and the tree is already parsed.
In almost all cases, you should call to fetch the version, which will load the tree
if it's not already available.
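The try-get/async pattern described above can be sketched as follows; this is a minimal C# sketch assuming the Microsoft.CodeAnalysis.Workspaces package and an existing `document` instance (the helper name is hypothetical):

```csharp
using System.Threading;
using System.Threading.Tasks;
using Microsoft.CodeAnalysis;

// Hypothetical helper illustrating the cached-vs-computed access pattern.
static async Task<SyntaxTree?> GetTreeAsync(Document document, CancellationToken cancellationToken)
{
    // Fast path: succeeds only if the text is already loaded and the tree already parsed.
    if (document.TryGetSyntaxTree(out var tree))
        return tree;

    // Slow path: parses on first call; subsequent calls return a cached result.
    return await document.GetSyntaxTreeAsync(cancellationToken).ConfigureAwait(false);
}
```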
Gets the version of the document's top level signature if it is already loaded and available.
Gets the version of the syntax tree. This is generally the newer of the text version and the project's version.
if this Document supports providing data through the
and methods.
If then these methods will return instead.
if this Document supports providing data through the
method.
If then that method will return instead.
Gets the for this document asynchronously.
The returned syntax tree can be if the returns . This function may cause computation to occur the first time it is called, but will return
a cached result every subsequent time. 's can hold onto their roots lazily. So calls
to or may end up causing computation
to occur at that point.
Gets the root node of the current syntax tree if the syntax tree has already been parsed and the tree is still cached.
In almost all cases, you should call to fetch the root node, which will parse
the document if necessary.
Gets the root node of the syntax tree asynchronously.
The returned will be if returns . This function will return
the same value if called multiple times.
Only for features that absolutely must run synchronously (probably because they're
on the UI thread). Right now, the only feature this is for is Outlining as VS will
block on that feature from the UI thread when a document is opened.
Gets the current semantic model for this document if the model is already computed and still cached.
In almost all cases, you should call , which will compute the semantic model
if necessary.
Gets the current nullable disabled semantic model for this document if the model is already computed and still cached.
In almost all cases, you should call , which will compute the semantic model
if necessary.
Gets the nullable disabled semantic model for this document asynchronously.
The returned may be if returns . This function will
return the same value if called multiple times.
Gets the semantic model for this document asynchronously.
The returned may be if returns . This function will
return the same value if called multiple times.
Gets the semantic model for this document asynchronously.
The returned may be if returns . This function will
return the same value if called multiple times.
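Semantic model access follows the same shape; a sketch assuming a `document` whose project supports semantic models (the helper name is hypothetical):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.CodeAnalysis;

static async Task ReportDiagnosticsAsync(Document document, CancellationToken cancellationToken)
{
    // May be null for documents that don't support semantic models.
    var model = await document.GetSemanticModelAsync(cancellationToken).ConfigureAwait(false);
    if (model is null)
        return;

    foreach (var diagnostic in model.GetDiagnostics(cancellationToken: cancellationToken))
        Console.WriteLine(diagnostic);
}
```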
Creates a new instance of this document updated to have the source code kind specified.
Creates a new instance of this document updated to have the text specified.
Creates a new instance of this document updated to have a syntax tree rooted by the specified syntax node.
Creates a new instance of this document updated to have the specified name.
Creates a new instance of this document updated to have the specified folders.
Creates a new instance of this document updated to have the specified file path.
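Because documents are immutable, each of the With* methods above returns a new Document inside a new solution snapshot; the original is unchanged. A minimal sketch, assuming an existing `document`:

```csharp
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.Text;

// Chained updates: each call forks a new snapshot rather than mutating in place.
var updated = document
    .WithText(SourceText.From("class C { }"))
    .WithName("C.cs");

// Pick up the solution that contains the updated document.
var newSolution = updated.Project.Solution;
```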
Get the text changes between this document and a prior version of the same document. The changes, when applied
to the text of the old document, will produce the text of the current document.
Similar to , but should be used when in a forced
synchronous context.
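Computing the changes between two versions of a document can be sketched as follows (the helper name is hypothetical; `oldDocument` must be a prior version of `newDocument`):

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.Text;

static async Task PrintChangesAsync(Document oldDocument, Document newDocument)
{
    // Changes that, applied to the old document's text, produce the new document's text.
    IEnumerable<TextChange> changes = await newDocument.GetTextChangesAsync(oldDocument);
    foreach (var change in changes)
        Console.WriteLine($"{change.Span}: \"{change.NewText}\"");
}
```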
Gets the list of s that are linked to this
. s are considered to be linked if they
share the same . This is excluded from the
result.
Creates a branched version of this document that has its semantic model frozen in whatever state it is available
at the time, assuming a background process is constructing the semantics asynchronously. Repeated calls to this
method may return documents with increasingly more complete semantics.
Use this method to gain access to potentially incomplete semantics quickly.
Note: this will give back a solution where this 's project will not run generators
when getting its compilation. However, all other projects will still run generators when their compilations are
requested.
If then a forked document will be returned no matter what. This
should be used when the caller wants to ensure that further forks of that document will remain frozen and will
not run generators/skeletons. For example, if it is about to transform the document many times, and is fine with
the original semantic information they started with. If then this same document may be
returned if the compilation for its was already produced. In this case, generators and
skeletons will already have been run, so returning the same instance will be fast when getting semantics.
However, this does mean that future forks of this instance will continue running generators/skeletons. This
should be used for most clients that intend to just query for semantic information and do not intend to make any
further changes.
Returns the options that should be applied to this document. This consists of global options from ,
merged with any settings the user has specified at the document levels.
This method is async because this may require reading other files. In files that are already open, this is expected to be cheap and complete synchronously.
An identifier that can be used to retrieve the same across versions of the
workspace.
Creates a new instance.
The project id this document id is relative to.
An optional name to make this id easier to recognize while debugging.
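Creating ids can be sketched as below; the debug names are purely diagnostic and do not participate in identity:

```csharp
using Microsoft.CodeAnalysis;

// Each CreateNewId call produces a globally unique id.
var projectId = ProjectId.CreateNewId(debugName: "MyProject");

// A document id is always relative to a project id.
var documentId = DocumentId.CreateNewId(projectId, debugName: "MyFile.cs");
```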
A class that represents all the arguments necessary to create a new document instance.
The Id of the document.
The name of the document.
The names of the logical nested folders the document is contained in.
The kind of the source code.
The file path of the document.
True if the document is a side effect of the build.
A loader that can retrieve the document text.
A associated with this document
Create a new instance of a .
Creates info.
type that contains information regarding this document itself but
no tree information such as document info
type that contains information regarding this document itself but
no tree information such as document info
The Id of the document.
The name of the document.
The names of the logical nested folders the document is contained in.
The kind of the source code.
The file path of the document.
True if the document is a side effect of the build.
True if the source code contained in the document is only used at design time (e.g. for completion),
but is not passed to the compiler when the containing project is built, e.g. a Razor view
when we're linked to another file (a 'sibling') and will attempt to reuse
that sibling's tree as our own. Note: we won't know if we can actually use the contents of that sibling file
until we actually go and realize it, as it may contain constructs (like pp-directives) that prevent use. In
that case, we'll fall back to a normal incremental parse between our original and the latest text contents of our sibling's file.
when we're linked to another file (a 'sibling') and will attempt to reuse
that sibling's tree as our own. Note: we won't know if we can actually use the contents of that sibling file
until we actually go and realize it, as it may contain constructs (like pp-directives) that prevent use. In
that case, we'll fall back to a normal incremental parse between our original and the latest text contents of our sibling's file.
Used as a fallback value in GetComputedTreeAndVersionSource to avoid long lazy chain evaluations.
Provides an ITreeAndVersionSource, returning either this instance or _originalTreeSource.
If the lazy computation has already completed, then this object passes back itself, as it uses that
computation in its ITreeAndVersionSource implementation.
If the lazy computation has not completed, we don't wish to pass back an object using it, as doing so might
lead to a long chain of lazy evaluations. Instead, use the originalTreeSource passed into this object.
Returns a new instance of this document state that points to as the
text contents of the document, and which will produce a syntax tree that reuses from if possible, or which will incrementally parse the current tree to bring it up to
date with otherwise.
A source for constructed from an syntax tree.
A source for constructed from an syntax tree.
Not created from a text loader.
Absolute path of the file.
Specifies an encoding to be used if the actual encoding of the file
can't be determined from the stream content (the stream doesn't start with Byte Order Mark).
If null auto-detect heuristics are used to determine the encoding.
Note that if the stream starts with Byte Order Mark the value of is ignored.
Creates a content loader for specified file.
An absolute file path.
Specifies an encoding to be used if the actual encoding can't be determined from the stream content (the stream doesn't start with Byte Order Mark).
If not specified auto-detect heuristics are used to determine the encoding.
Note that if the stream starts with Byte Order Mark the value of is ignored.
is null.
is not an absolute path.
We have this limit on file size to reduce a chance of OOM when user adds massive files to the solution (often by accident).
The threshold is 100MB which came from some internal data on big files and some discussion.
Creates from .
Stream.
Obsolete. Null.
Creates from .
Load a text and a version of the document in the workspace.
Load a text and a version of the document.
Computes the text changes between two documents.
The old version of the document.
The new version of the document.
The cancellation token.
An array of changes.
Computes the text changes between two documents.
The old version of the document.
The new version of the document.
The type of differencing to perform. Not supported by all text differencing services.
The cancellation token.
An array of changes.
Options used to load .
The mode in which value is preserved.
The value is guaranteed to have the same contents across multiple accesses.
The value is guaranteed to the same instance across multiple accesses.
Represents a project that is part of a .
The solution this project is part of.
The ID of the project. Multiple instances may share the same ID. However, only
one project may have this ID in any given solution.
The path to the project file or null if there is no project file.
The path to the output file, or null if it is not known.
The path to the reference assembly output file, or null if it is not known.
Compilation output file paths.
The default namespace of the project ("" if not defined, which means global namespace),
or null if it is unknown or not applicable.
Right now VB doesn't have the concept of "default namespace". But we conjure one in workspace
by assigning the value of the project's root namespace to it. So various features can choose to
use it for their own purposes.
In the future, we might consider officially exposing "default namespace" for VB project
(e.g. through a "defaultnamespace" msbuild property)
if this supports providing data through the
method.
If then method will return instead.
The language services from the host environment associated with this project's language.
Immutable snapshot of language services from the host environment associated with this project's language.
Use this over when possible.
The language associated with the project.
The name of the assembly this project represents.
The name of the project. This may be different than the assembly name.
The list of all other metadata sources (assemblies) that this project references.
The list of all other projects within the same solution that this project references.
The list of all other projects that this project references, including projects that
are not part of the solution.
The list of all the diagnostic analyzer references for this project.
The options used by analyzers for this project.
The options used by analyzers for this project.
The options used when building the compilation for this project.
The options used when parsing documents for this project.
Returns true if this is a submission project.
True if the project has any documents.
All the document IDs associated with this project.
All the additional document IDs associated with this project.
All the analyzer config document IDs associated with this project.
All the regular documents associated with this project. Documents produced from source generators are returned by
.
All the additional documents associated with this project.
All the s associated with this project.
True if the project contains a document with the specified ID.
True if the project contains an additional document with the specified ID.
True if the project contains an with the specified ID.
Get the documentId in this project with the specified syntax tree.
Get the document in this project with the specified syntax tree.
Get the document in this project with the specified document Id.
Get the additional document in this project with the specified document Id.
Get the analyzer config document in this project with the specified document Id.
Gets a document or a source generated document in this solution with the specified document ID.
Gets a document, additional document, analyzer config document or a source generated document in this solution with the specified document ID.
Gets all source generated documents in this project.
Returns the for a source generated document that has already been generated and observed.
This is only safe to call if you already have seen the SyntaxTree or equivalent that indicates the document state has already been
generated. This method exists to implement and is best avoided unless you're doing something
similarly tricky like that.
Tries to get the cached for this project if it has already been created and is still cached. In almost all
cases you should call which will either return the cached
or create a new one otherwise.
Get the for this project asynchronously.
Returns the produced , or if returns . This function will
return the same value if called multiple times.
Determines if the compilation returned by and all its referenced compilations are from fully loaded projects.
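The cached-vs-computed pattern for compilations mirrors the one for syntax trees; a sketch assuming an existing `project` (the helper name is hypothetical):

```csharp
using System.Threading;
using System.Threading.Tasks;
using Microsoft.CodeAnalysis;

static async Task<Compilation?> GetCompilationAsync(Project project, CancellationToken cancellationToken)
{
    // Fast path: succeeds only if the compilation was already created and is still cached.
    if (project.TryGetCompilation(out var compilation))
        return compilation;

    // Slow path: builds (and caches) the compilation; null if the language
    // doesn't support producing compilations.
    return await project.GetCompilationAsync(cancellationToken).ConfigureAwait(false);
}
```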
Gets an object that lists the added, changed and removed documents between this project and the specified project.
The project version. This equates to the version of the project file.
The version of the most recently modified document.
The most recent version of the project, its documents and all dependent projects and documents.
The semantic version of this project including the semantics of referenced projects.
This version changes whenever the consumable declarations of this project and/or projects it depends on change.
The semantic version of this project not including the semantics of referenced projects.
This version changes only when the consumable declarations of this project change.
Creates a new instance of this project updated to have the new assembly name.
Creates a new instance of this project updated to have the new default namespace.
Creates a new instance of this project updated to have the specified compilation options.
Creates a new instance of this project updated to have the specified parse options.
Creates a new instance of this project updated to include the specified project reference
in addition to already existing ones.
Creates a new instance of this project updated to include the specified project references
in addition to already existing ones.
Creates a new instance of this project updated to no longer include the specified project reference.
Creates a new instance of this project updated to replace existing project references
with the specified ones.
Creates a new instance of this project updated to include the specified metadata reference
in addition to already existing ones.
Creates a new instance of this project updated to include the specified metadata references
in addition to already existing ones.
Creates a new instance of this project updated to no longer include the specified metadata reference.
Creates a new instance of this project updated to replace existing metadata references
with the specified ones.
Creates a new instance of this project updated to include the specified analyzer reference
in addition to already existing ones.
Creates a new instance of this project updated to include the specified analyzer references
in addition to already existing ones.
Creates a new instance of this project updated to no longer include the specified analyzer reference.
Creates a new instance of this project updated to replace existing analyzer references
with the specified ones.
Creates a new instance of this project updated to replace existing analyzer references
with the specified ones.
Creates a new document in a new instance of this project.
Creates a new document in a new instance of this project.
Creates a new document in a new instance of this project.
Creates a new additional document in a new instance of this project.
Creates a new additional document in a new instance of this project.
Creates a new analyzer config document in a new instance of this project.
Creates a new instance of this project updated to no longer include the specified document.
Creates a new instance of this project updated to no longer include the specified documents.
Creates a new instance of this project updated to no longer include the specified additional document.
Creates a new instance of this project updated to no longer include the specified additional documents.
Creates a new instance of this project updated to no longer include the specified analyzer config document.
Creates a new solution instance that no longer includes the specified s.
Retrieves fallback analyzer options for this project's language.
Get s of added documents in the order they appear in of the .
Get s of added additional documents in the order they appear in of .
Get s of added analyzer config documents in the order they appear in of .
Get s of documents with any changes (textual and non-textual)
in the order they appear in of .
Get changed documents in the order they appear in of .
When is true, only get documents with text changes (we only check text source, not actual content);
otherwise get documents with any changes i.e. , and file path.
Get s of additional documents with any changes (textual and non-textual)
in the order they appear in of .
Get s of analyzer config documents with any changes (textual and non-textual)
in the order they appear in of .
Get s of removed documents in the order they appear in of .
Get s of removed additional documents in the order they appear in of .
Get s of removed analyzer config documents in the order they appear in of .
Represents a 'cone' of projects that is being synced between the local and remote hosts. A project cone starts
with a , and contains both it and all dependent projects within .
A models the dependencies between projects in a solution.
The map of projects to dependencies. This field is always fully initialized. Projects which do not reference
any other projects do not have a key in this map (i.e. they are omitted, as opposed to including them with
an empty value).
- This field is always fully initialized
- Projects which do not reference any other projects do not have a key in this map (i.e.
they are omitted, as opposed to including them with an empty value)
- The keys and values in this map are always contained in
The lazily-initialized map of projects to projects which reference them. This field is either null, or
fully-computed. Projects which are not referenced by any other project do not have a key in this map (i.e.
they are omitted, as opposed to including them with an empty value).
Intentionally created with a null reverseReferencesMap. Doing so indicates _lazyReverseReferencesMap
shouldn't be calculated until reverse reference information is requested. Once this information
has been calculated, forks of this PDG will calculate their new reverse references in a non-lazy fashion.
Gets the list of projects that this project directly depends on.
Gets the list of projects that directly depend on this project.
Gets the list of projects that this project directly or transitively depends on, if it has already been
cached.
Gets the list of projects that this project directly or transitively depends on.
Gets the list of projects that directly or transitively depend on this project.
Returns all the projects for the solution in a topologically sorted order with respect
to their dependencies. Projects that depend on other projects will always show up later in this sequence
than the projects they depend on.
Returns a sequence of sets, where each set contains items with shared interdependency,
and there is no dependency between sets. Each set returned will be sorted in topological order.
Gets the list of projects that directly or transitively depend on this project, if it has already been
cached.
Checks whether depends on .
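Querying the dependency graph can be sketched as below, assuming an existing `solution`:

```csharp
using System;
using System.Linq;
using Microsoft.CodeAnalysis;

var graph = solution.GetProjectDependencyGraph();

// Topological order: each project appears after every project it depends on.
foreach (var projectId in graph.GetTopologicallySortedProjects())
    Console.WriteLine(solution.GetProject(projectId)?.Name);

// Independent sets: no dependencies between sets, each set topologically sorted;
// sets could in principle be processed in parallel with one another.
foreach (var set in graph.GetDependencySets())
    Console.WriteLine(string.Join(", ", set.Select(id => id.ToString())));
```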
Computes a new for the addition of additional project references.
Computes a new for the addition of additional project references.
The previous , or
if the reverse references map was not computed for the previous graph.
Computes a new for the addition of additional project references.
Computes a new for the addition of new projects.
Computes a new for the removal of all project references from a
project.
The prior to the removal,
or if the reverse references map was not computed for the prior graph.
The project ID from which a project reference is being removed.
The targets of the project references which are being removed.
The updated (complete) reverse references map, or if the reverse references
map could not be incrementally updated.
Computes a new for the removal of a project.
The prior to the removal.
The prior to the removal.
This map serves as a hint to the removal process; i.e. it is assumed correct if it contains data, but may be
omitted without impacting correctness.
The ID of the project which is being removed.
The for the project dependency graph once the project is removed.
Computes a new for the removal of a project.
The prior to the removal,
or if the value prior to removal was not computed for the graph.
Computes a new for the removal of a project.
Computes a new for the removal of a project.
Computes a new for the removal of a project reference.
The prior to the removal,
or if the reverse references map was not computed for the prior graph.
The project ID from which a project reference is being removed.
The target of the project reference which is being removed.
The updated (complete) reverse references map, or if the reverse references
map could not be incrementally updated.
An identifier that can be used to refer to the same across versions.
This supports the general message-pack pattern of being serializable. However, in
practice, this is not serialized directly, but through the use of a custom formatter.
Checksum of this ProjectId, built only from .
The system generated unique id.
An optional name to show only for debugger-display purposes. This must not be used for any other
purpose. Importantly, it must not be part of the equality/hashing/comparable contract of this type (including
).
Create a new ProjectId instance.
An optional name to make this id easier to recognize while debugging.
A class that represents all the arguments necessary to create a new project instance.
The unique Id of the project.
The version of the project.
The name of the project. This may differ from the project's filename.
The name of the assembly that this project will create, without file extension.
,
The language of the project.
The path to the project file or null if there is no project file.
The path to the output file (module or assembly).
The path to the reference assembly output file.
The path to the compiler output file (module or assembly).
The default namespace of the project ("" if not defined, which means global namespace),
or null if it is unknown or not applicable.
Right now VB doesn't have the concept of "default namespace", but we conjure one in workspace
by assigning the value of the project's root namespace to it. So various features can choose to
use it for their own purpose.
In the future, we might consider officially exposing "default namespace" for VB project
(e.g. through a "defaultnamespace" msbuild property)
Algorithm to calculate content checksum for debugging purposes.
True if this is a submission project for interactive sessions.
True if project information is complete. In some workspace hosts, it is possible
a project only has partial information. In such cases, a project might not have all
information on its files or references.
True if we should run analyzers for this project.
True if the project contains references to the SDK CodeStyle analyzers.
The initial compilation options for the project, or null if the default options should be used.
The initial parse options for the source code documents in this project, or null if the default options should be used.
The list of source documents initially associated with the project.
The project references initially defined for the project.
The metadata references initially defined for the project.
The analyzers initially associated with this project.
The list of non-source documents associated with this project.
The list of analyzer config documents associated with this project.
Type of the host object.
Create a new instance of a .
Create a new instance of a .
type that contains information regarding this project itself but
no tree information such as document info
type that contains information regarding this project itself but
no tree information such as document info
Matches names like: Microsoft.CodeAnalysis.Features (netcoreapp3.1)
The unique Id of the project.
The version of the project.
The name of the project. This may differ from the project's filename.
The name of the assembly that this project will create, without file extension.
,
The language of the project.
The path to the project file or null if there is no project file.
The path to the output file (module or assembly).
The path to the reference assembly output file.
Paths to the compiler output files.
The default namespace of the project.
Algorithm to calculate content checksum for debugging purposes.
True if this is a submission project for interactive sessions.
True if project information is complete. In some workspace hosts, it is possible
a project only has partial information. In such cases, a project might not have all
information on its files or references.
True if we should run analyzers for this project.
The id reported during telemetry events.
True if the project contains references to the SDK CodeStyle analyzers.
The name and flavor portions of the project broken out. For example, the project
Microsoft.CodeAnalysis.Workspace (netcoreapp3.1) would have the name
Microsoft.CodeAnalysis.Workspace and the flavor netcoreapp3.1. Values may be null if the name does not contain a flavor.
Aliases for the reference. Empty if the reference has no aliases.
True if interop types defined in the referenced project should be embedded into the referencing project.
Holds onto a map from source path to calculated by the compiler and chained to .
This cache is stored on and needs to be invalidated whenever for the language of the project change,
an editorconfig file is updated, etc.
Holds onto a map from source path to calculated by the compiler and chained to .
This cache is stored on and needs to be invalidated whenever for the language of the project change,
an editorconfig file is updated, etc.
The documents in this project. They are sorted by to provide a stable sort for
.
The additional documents in this project. They are sorted by to provide a stable sort for
.
The analyzer config documents in this project. They are sorted by to provide a stable sort for
.
Analyzer config options to be used for specific trees.
Provides editorconfig options for Razor design-time documents.
Razor does not support editorconfig options but has custom settings for a few formatting options whose values
are only available in-proc and the same for all Razor design-time documents.
This type emulates these options as analyzer config options.
Provides editorconfig options for Razor design-time documents.
Razor does not support editorconfig options but has custom settings for a few formatting options whose values
are only available in-proc and the same for all Razor design-time documents.
This type emulates these options as analyzer config options.
Updates to a newer version of attributes.
Determines whether contains a reference to a specified project.
The target project of the reference.
if this project references ; otherwise, .
Represents a set of projects and their source code documents.
Result of calling .
Mapping of DocumentId to the frozen solution we produced for it the last time we were queried. This
instance should be used as its own lock when reading or writing to it.
Per solution services provided by the host environment. Use this instead of when possible.
The Workspace this solution is associated with.
The Id of the solution. Multiple solution instances may share the same Id.
The path to the solution file or null if there is no solution file.
The solution version. This equates to the solution file's version.
A list of all the ids for all the projects contained by the solution.
A list of all the projects contained by the solution.
The version of the most recently modified project.
True if the solution contains a project with the specified project ID.
Gets the project in this solution with the specified project ID.
If the id is not an id of a project that is part of this solution, the method returns null.
Gets the associated with an assembly symbol.
Given a returns the of the it came
from. Returns if does not come from any project in this solution.
This function differs from in terms of how it
treats s. Specifically, say there is the following:
Project-A, containing Symbol-A.
Project-B, with a reference to Project-A, and usage of Symbol-A.
It is possible (with retargeting, and other complex cases) that Symbol-A from Project-B will be a different
symbol than Symbol-A from Project-A. However,
will always try to return Project-A for either of the Symbol-A's, as it prefers to return the original
Source-Project of the original definition, not the project that actually produced the symbol. For many
features this is an acceptable abstraction. However, for some cases (Find-References in particular) it is
necessary to resolve symbols back to the actual project/compilation that produced them for correctness.
Returns the that produced the symbol. In the case of a symbol that was retargeted,
this will be the compilation it was retargeted into, not the original compilation that it was retargeted from.
True if the solution contains the document in one of its projects
True if the solution contains the additional document in one of its projects
True if the solution contains the analyzer config document in one of its projects
Gets the documentId in this solution with the specified syntax tree.
Gets the documentId in this solution with the specified syntax tree.
Gets the document in this solution with the specified document ID.
Gets a document or a source generated document in this solution with the specified document ID.
Gets a document, additional document, analyzer config document or a source generated document in this solution with the specified document ID.
Gets the additional document in this solution with the specified document ID.
Gets the analyzer config document in this solution with the specified document ID.
Gets the document in this solution with the specified syntax tree.
Creates a new solution instance that includes a project with the specified language and names.
Returns the new project.
Creates a new solution instance that includes a project with the specified language and names.
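Building up a solution in this immutable style can be sketched with AdhocWorkspace, a simple host suitable for tests and tools ("MyProject"/"MyAssembly" are placeholder names):

```csharp
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.Text;

using var workspace = new AdhocWorkspace();
var project = workspace.AddProject("MyProject", LanguageNames.CSharp);
var document = workspace.AddDocument(project.Id, "Program.cs", SourceText.From("class C { }"));

// Solutions are immutable: each With*/Add* call returns a new snapshot
// rather than mutating the existing one.
var newSolution = document.Project.Solution
    .WithProjectAssemblyName(project.Id, "MyAssembly");
```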
Creates a new solution instance with the project specified updated to have the new
assembly name.
Creates a new solution instance with the project specified updated to have the output file path.
Creates a new solution instance with the project specified updated to have the reference assembly output file path.
Creates a new solution instance with the project specified updated to have the compiler output file path.
Creates a new solution instance with the project specified updated to have the default namespace.
Creates a new solution instance with the project specified updated to have the specified attributes.
Creates a new solution instance with the project specified updated to have the name.
Creates a new solution instance with the project specified updated to have the project file path.
Create a new solution instance with the project specified updated to have
the specified compilation options.
Create a new solution instance with the project specified updated to have
the specified parse options.
Create a new solution instance updated to use the specified .
Create a new solution instance with the project specified updated to have
the specified hasAllInformation.
Create a new solution instance with the project specified updated to have
the specified runAnalyzers.
Create a new solution instance with the project specified updated to have
the specified hasSdkCodeStyleAnalyzers.
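Because each of these With* methods returns a new immutable Solution rather than mutating the existing one, updates are typically chained. A minimal sketch, assuming a `solution` and a `projectId` are in scope (the string values are hypothetical):

```csharp
// Sketch: each With* call forks the solution; the original is unchanged.
Solution newSolution = solution
    .WithProjectAssemblyName(projectId, "MyApp.Renamed")
    .WithProjectDefaultNamespace(projectId, "MyApp")
    .WithProjectName(projectId, "MyApp");
// `solution` still refers to the original, unmodified snapshot.
```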
Creates a new solution instance with the project documents in the order specified by the document ids.
The specified document ids must be the same set as what is already in the project; no adding or removing is allowed.
is .
is .
The solution does not contain .
The number of documents specified in is not equal to the number of documents in project .
Document specified in does not exist in project .
Updates the solution with project information stored in .
Updates the solution with project information stored in .
Create a new solution instance with the project specified updated to include
the specified project reference.
is .
is .
The solution does not contain .
The project already references the target project.
Create a new solution instance with the project specified updated to include
the specified project references.
is .
contains .
contains duplicate items.
The solution does not contain .
The project already references the target project.
Adding the project reference would create a circular dependency.
Create a new solution instance with the project specified updated to no longer
include the specified project reference.
is .
is .
The solution does not contain .
Create a new solution instance with the project specified updated to contain
the specified list of project references.
Id of the project whose references to replace with .
New project references.
is .
contains .
contains duplicate items.
The solution does not contain .
Create a new solution instance with the project specified updated to include the
specified metadata reference.
is .
is .
The solution does not contain .
The project already contains the specified reference.
Create a new solution instance with the project specified updated to include the
specified metadata references.
is .
contains .
contains duplicate items.
The solution does not contain .
The project already contains the specified reference.
Create a new solution instance with the project specified updated to no longer include
the specified metadata reference.
is .
is .
The solution does not contain .
The project does not contain the specified reference.
Create a new solution instance with the project specified updated to include only the
specified metadata references.
is .
contains .
contains duplicate items.
The solution does not contain .
Create a new solution instance with the project specified updated to include the
specified analyzer reference.
is .
is .
The solution does not contain .
Create a new solution instance with the project specified updated to include the
specified analyzer references.
is .
contains .
contains duplicate items.
The solution does not contain .
The project already contains the specified reference.
Create a new solution instance with the project specified updated to no longer include
the specified analyzer reference.
is .
is .
The solution does not contain .
The project does not contain the specified reference.
Create a new solution instance with the project specified updated to include only the
specified analyzer references.
is .
contains .
contains duplicate items.
The solution does not contain .
Create a new solution instance updated to include the specified analyzer reference.
is .
Create a new solution instance updated to include the specified analyzer references.
contains .
contains duplicate items.
The solution already contains the specified reference.
Create a new solution instance with the project specified updated to no longer include
the specified analyzer reference.
is .
The solution does not contain the specified reference.
Creates a new solution instance with the specified analyzer references.
contains .
contains duplicate items.
Creates a new solution instance with the corresponding project updated to include a new
document instance defined by its name and text.
Creates a new solution instance with the corresponding project updated to include a new
document instance defined by its name and text.
Creates a new solution instance with the corresponding project updated to include a new
document instance defined by its name and root .
Creates a new solution instance with the project updated to include a new document with
the arguments specified.
Create a new solution instance with the corresponding project updated to include a new
document instance defined by the document info.
Create a new instance with the corresponding s updated to include
the documents specified by .
A new with the documents added.
Creates a new solution instance with the corresponding project updated to include a new
additional document instance defined by its name and text.
Creates a new solution instance with the corresponding project updated to include a new
additional document instance defined by its name and text.
Creates a new solution instance with the corresponding project updated to include a new
analyzer config document instance defined by its name and text.
Creates a new Solution instance that contains a new compiler configuration document like a .editorconfig file.
Creates a new solution instance that no longer includes the specified document.
Creates a new solution instance that no longer includes the specified documents.
Creates a new solution instance that no longer includes the specified additional document.
Creates a new solution instance that no longer includes the specified additional documents.
Creates a new solution instance that no longer includes the specified .
Creates a new solution instance that no longer includes the specified s.
Creates a new solution instance with the document specified updated to have the new name.
Creates a new solution instance with the document specified updated to be contained in
the sequence of logical folders.
Creates a new solution instance with the document specified updated to have the specified file path.
Creates a new solution instance with the document specified updated to have the text
specified.
Creates a new solution instance with the additional document specified updated to have the text
specified.
Creates a new solution instance with the analyzer config document specified updated to have the text
supplied by the text loader.
Creates a new solution instance with the document specified updated to have the text
and version specified.
Creates a new solution instance with the additional document specified updated to have the text
and version specified.
Creates a new solution instance with the analyzer config document specified updated to have the text
and version specified.
Creates a new solution instance with the document specified updated to have a syntax tree
rooted by the specified syntax node.
Creates a new solution instance with the document specified updated to have the source
code kind specified.
Creates a new solution instance with the document specified updated to have the text
supplied by the text loader.
Creates a new solution instance with the additional document specified updated to have the text
supplied by the text loader.
Creates a new solution instance with the analyzer config document specified updated to have the text
supplied by the text loader.
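For example, replacing a document's text produces a forked solution with the new content; the original snapshot is untouched. A hedged sketch, assuming `solution` and `documentId` are in scope:

```csharp
using Microsoft.CodeAnalysis.Text;

// Sketch: fork the solution with updated text for one document.
SourceText newText = SourceText.From("class C { }");
Solution updated = solution.WithDocumentText(documentId, newText);
```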
Returns a solution instance where every project is frozen at whatever current state it is in
Creates a branch of the solution that has its compilations frozen in whatever state they are in at the time,
assuming a background compiler is busy building these compilations.
A compilation for the project containing the specified document id will be guaranteed to exist with
at least the syntax tree for the document.
This is not intended to be a public API; use Document.WithFrozenPartialSemantics() instead.
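The supported public entry point for this behavior is on Document. A sketch, assuming a `document` and a `cancellationToken` are in scope:

```csharp
// Sketch: freeze partial semantics for a document, then query it.
// The frozen document's project is guaranteed a compilation containing
// at least this document's syntax tree.
Document frozen = document.WithFrozenPartialSemantics(cancellationToken);
SemanticModel? model = await frozen.GetSemanticModelAsync(cancellationToken);
```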
Returns one of any of the related documents of . Importantly, this will never
return (unlike which includes the original
file in the result).
A hint on the first project to search when looking for related
documents. Must not be the project that is from.
Formerly, returned a copy of the solution isolated from the original so that they do not share computed state. It now does nothing.
Creates a new solution instance with all the documents specified updated to have the same specified text.
Returns a new Solution that will always produce a specific output for a generated file. This is used only in the
implementation of , where, if a user has a source
generated file open, we need to make sure everything lines up.
Undoes the operation of ; any frozen source generated document is allowed
to have its real output again.
Returns a new Solution which represents the same state as before, but with the cached generator driver state from the given project updated to match.
When generators are run in a Solution snapshot, they may cache state to speed up future runs. For Razor, we only run their generator on forked
solutions that are thrown away; this API gives us a way to reuse that cached state in other forked solutions, since otherwise there's no way to reuse
the cached state.
Gets an objects that lists the added, changed and removed projects between
this solution and the specified solution.
Gets the set of s in this with a
that matches the given file path. This may return IDs for any type of document
including s or s.
It's possible (but unlikely) that the same file may exist as more than one type of document in the same solution. If this
were to return more than one , you should not assume that, just because one is a regular source file,
all of them are.
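As a sketch of the point above (the file path is hypothetical; `solution` is assumed in scope):

```csharp
// Sketch: one file path may map to several documents (e.g. linked files
// in multi-targeting projects), and they need not all be regular source
// documents.
foreach (DocumentId id in solution.GetDocumentIdsWithFilePath(@"C:\src\App\C.cs"))
{
    // GetDocument returns null when the id refers to an additional or
    // analyzer config document rather than a regular source document.
    Document? doc = solution.GetDocument(id);
}
```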
Gets a that details the dependencies between projects for this solution.
Returns the options that should be applied to this solution. This is equivalent to when the
instance was created.
Analyzer references associated with the solution.
Fallback analyzer config options by language. The set of languages does not need to match the set of languages of projects included in the current solution snapshot
since these options can be updated independently of the projects contained in the solution.
Generally, the host is responsible for keeping these options up-to-date with whatever option store it maintains
and for making sure fallback options are available in the solution for all languages the host supports.
Creates a new solution instance with the specified .
Creates a new solution instance with the specified serializable .
Throws if setting the project references of project to specified
would form a cycle in project dependency graph.
Throws if setting the project references of project to specified
would form an invalid submission project chain.
Submission projects can reference at most one other submission project. Regular projects can't reference any.
Strongly held reference to the semantic models for the active document (and its related documents linked into
other projects). By strongly holding onto them, we ensure that they won't be GC'ed between feature requests
from multiple features that care about it. As the active document has the most features running on it
continuously, we definitely do not want to drop this. Note: this cached value only helps with performance,
not with correctness. Importantly, the concept of 'active document' is itself fundamentally racy. That's ok
though, as we simply want these semantic models to settle into a stable state over time. We don't
need to be perfect about it.
Tracks the changes made to a project and provides the facility to get a lazily built
compilation for that project. As the compilation is being built, the partial results are
stored as well so that they can be used in the 'in progress' workspace snapshot.
The base type of all states. The state of a starts at null, and then will progress through the other states until it
finally reaches .
Whether the generated documents in are frozen and generators should
never be run again, ever, even if a document is later changed. This is used to ensure that when we
produce a frozen solution for partial semantics, further downstream forking of that solution won't
rerun generators. This is because of two reasons:
- Generally once we've produced a frozen solution with partial semantics, we now want speed rather
than accuracy; a generator running in a later path will still cause issues there.
- The frozen solution with partial semantics makes no guarantee that other syntax trees exist or
whether we even have references -- it's pretty likely that running a generator might produce worse results
than what we originally had.
This also controls if we will generate skeleton references for cross-language P2P references when
creating the compilation for a particular project. When entirely frozen, we do not want to do this due
to the enormous cost of emitting ref assemblies for cross language cases.
The best compilation that is available that source generators have not run on. May be an
in-progress compilation, a full declaration compilation, or a final compilation.
A state where we are holding onto a previously built compilation, and have a known set of transformations
that could get us to a more final state.
The result of taking the original completed compilation that had generated documents and updating
them by applying the ; this is not a
correct snapshot in that the generators have not been rerun, but may be reusable if the generators
are later found to give the same output.
The list of changes that have happened since we last computed a compilation. The oldState corresponds to
the state of the project prior to the mutation.
The final state a compilation tracker reaches. At this point is now available. It is a requirement that any provided to any clients of the (for example, through
or ) must be from a . This is because
stores extra information in it about that compilation that the can be
queried for (for example: ). If s from other s are passed out, then these other
APIs will not function correctly.
Specifies whether and all compilations it
depends on contain full information or not.
Used to determine which project an assembly symbol came from after the fact.
The final compilation, with all references and source generators run. This is distinct from , which in the case will be the
compilation before any source generators were run. This ensures that a later invocation of the
source generators consumes , which will avoid generators being run a second
time on a compilation that already contains the output of other generators. If source generators are
not active, this is equal to .
Whether or not this final compilation state *just* generated documents which exactly correspond to the
state of the compilation. False if the generated documents came from a point in the past, and are being
carried forward until the next time we run generators.
Not held onto
Not held onto
The best generated documents we have for the current state.
The that was used for the last run, to allow for incremental reuse. May be
null if we don't have generators in the first place, haven't run generators yet for this project, or had to
get rid of our driver for some reason.
The best generated documents we have for the current state.
The that was used for the last run, to allow for incremental reuse. May be
null if we don't have generators in the first place, haven't run generators yet for this project, or had to
get rid of our driver for some reason.
The best generated documents we have for the current state.
The that was used for the last run, to allow for incremental reuse. May be
null if we don't have generators in the first place, haven't run generators yet for this project, or had to
get rid of our driver for some reason.
Access via the and methods.
Intentionally not readonly. This is a mutable struct.
Set via a feature flag to enable strict validation that produced compilations match the original states. This validation is expensive, so we don't want it
running in normal production scenarios.
Creates a tracker for the provided project. The tracker will be in the 'empty' state
and will have no extra information beyond the project itself.
Creates a new instance of the compilation info, retaining any already built
compilation state as the now 'old' state
Gets the final compilation if it is available.
Validates the compilation is consistent and we didn't have a bug in producing it. This only runs under a feature flag.
This is just the same as but throws a custom exception type to make this easier to find in telemetry, since the exception type
is easily seen there.
Flags controlling if generator documents should be created or not.
Source generators should be run and should produce up to date results.
Source generators should not run. Whatever results were previously computed should be reused.
Flags controlling if skeleton references should be created or not.
Skeleton references should be created, and should be up to date with the project they are created for.
Skeleton references should only be created for a compilation if no existing skeleton exists for their
project from some point in the past.
Skeleton references should not be created at all.
Create up to date source generator docs and create up to date skeleton references when needed.
Do not create up to date source generator docs and do not create up to date skeleton references for P2P
references. For both, use whatever has been generated most recently.
Symbols need to be either or .
Green version of the information about this Solution instance. Responsible for non-semantic information
about the solution structure. Specifically, the set of green s, with all their
green s. Contains the attributes, options and relationships between projects.
Effectively, everything specified in a project file. Does not contain anything related to s or semantics.
Cache we use to map between unrooted symbols (i.e. assembly, module and dynamic symbols) and the project
they came from. That way if we are asked about many symbols from the same assembly/module we can answer the
question quickly after computing for the first one. Created on demand.
Same as
except that it will still fork even if newSolutionState is unchanged from .
Creates a mapping of to
Changed project id
Dependency graph
Callback to modify tracker information. Return value indicates whether the collection was modified.
Data to pass to
Creates a mapping of to
Changed project ids
Dependency graph
Callback to modify tracker information. Return value indicates whether the collection was modified.
Data to pass to
Creates a mapping of to
Callback to determine whether an item can be reused
Data to pass to
Callback to modify tracker information. Return value indicates whether the collection was modified.
Data to pass to
Map from each project to the it is currently at. Loosely, the
execution version allows us to have the generated documents for a project get fixed at some point in the past
when they were generated, up until events happen in the host that cause a need for them to be brought up to
date. This is ambient, compilation-level, information about our projects, which is why it is stored at this
compilation-state level. When syncing to our OOP process, this information is included, allowing the oop side
to move its own generators forward when a host changes these versions.
Contains information for all projects, even non-C#/VB ones, though this will have no meaning for those project
types.
Applies an update operation to specified .
Documents may be in different projects.
Returns with projects updated to new document states specified in .
Updates the to a new state with and returns a that
reflects these changes in the project compilation.
Gets the associated with an assembly symbol.
Returns the compilation for the specified . Can return when the project
does not support compilations.
The compilation is guaranteed to have a syntax tree for each document of the project.
Returns the compilation for the specified . Can return when the project
does not support compilations.
The compilation is guaranteed to have a syntax tree for each document of the project.
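The public surface for this is Project.GetCompilationAsync, whose result must be null-checked. A minimal sketch, assuming a `project` and a `cancellationToken` are in scope:

```csharp
// Sketch: compilations can be null for project types that don't
// support them, so always null-check the result.
Compilation? compilation = await project.GetCompilationAsync(cancellationToken);
if (compilation is not null)
{
    // Guaranteed to contain a syntax tree for each document of the project.
}
```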
Return reference completeness for the given project and all projects this references.
Returns the generated document states for source generated documents.
Returns the for a source generated document that has already been generated and observed.
This is only safe to call if you already have seen the SyntaxTree or equivalent that indicates the document state has already been
generated. This method exists to implement and is best avoided unless you're doing something
similarly tricky like that.
Get a metadata reference to this compilation info's compilation with respect to
another project. For cross-language references, this produces a skeleton assembly. If the
compilation is not available, it is built. If a skeleton assembly reference is
needed and does not exist, it is also built.
Get a metadata reference for the project's compilation. Returns upon failure, which
can happen when trying to build a skeleton reference that fails to build.
Undoes the operation of ; any frozen source generated document is allowed
to have its real output again.
Returns a new SolutionState that will always produce a specific output for a generated file. This is used only in the
implementation of , where, if a user has a source
generated file open, we need to make sure everything lines up.
Updates entries in our to the corresponding values in the
given . Importantly, must refer to projects in this solution. Projects not mentioned in
will not be touched (and they will stay in the map).
Creates a branch of the solution that has its compilations frozen in whatever state they are in at the time,
assuming a background compiler is busy building these compilations.
A compilation for the project containing the specified document id will be guaranteed to exist with at least the
syntax tree for the document.
This is not intended to be a public API; use Document.WithFrozenPartialSemantics() instead.
Core helper that takes a set of s and does the application of the appropriate documents to each project.
The set of documents to add.
Creates a new solution instance with all the documents specified updated to have the same specified text.
Returns if this / could produce the
given . The symbol must be a , or .
If is true, then will not be considered
when answering this question. In other words, if is an and is then this will only
return true if the symbol is . If is
false, then it can return true if is or any
of the symbols returned by for
any of the references of the .
Updates the creation policy for this tracker. Setting it to .
Forces source generated documents to be created by dumping any existing and rerunning generators from scratch for this tracker.
Updates the creation policy for this tracker. Setting it to .
Gets the source generator files generated by this . Controls whether frozen source generated documents are included
in the result. If this will call all the way through to the most underlying to get its generated documents. If this is then
this will be those same generated documents, along with all the generated documents from all wrapping 's frozen documents overlaid on top.
Information maintained for unrooted symbols.
The project the symbol originated from, i.e. the symbol is defined in the project or its metadata reference.
The Compilation that produced the symbol.
If the symbol is defined in a metadata reference of , information about the
reference.
Information maintained for unrooted symbols.
The project the symbol originated from, i.e. the symbol is defined in the project or its metadata reference.
The Compilation that produced the symbol.
If the symbol is defined in a metadata reference of , information about the
reference.
The project the symbol originated from, i.e. the symbol is defined in the project or its metadata reference.
The Compilation that produced the symbol.
If the symbol is defined in a metadata reference of , information about the
reference.
A helper type for mapping back to an originating /.
In IDE scenarios we have the need to map from an to the that
contained a that could have produced that symbol. This is especially needed with OOP
scenarios where we have to communicate to OOP from VS (and vice versa) what symbol we are referring to. To do
this, we pass along a project where this symbol could be found, and enough information (a ) to resolve that symbol back in that .
The s or s produced through for all the references exposed by . Sorted by the hash code produced by so that it can be binary searched efficiently.
Caches the skeleton references produced for a given project/compilation under the varying it might be referenced by. Skeletons are used in the compilation tracker
to allow cross-language project references with live semantic updating between VB/C# and vice versa.
Specifically, in a cross language case we will build a skeleton ref for the referenced project and have the
referrer use that to understand its semantics.
This approach works, but has the caveat that live cross-language semantics are only possible when the skeleton
assembly can be built. This should always be the case for correct code, but it may not be the case for code
with errors depending on if the respective language compiler is resilient to those errors or not. In that case
though where the skeleton cannot be built, this type provides mechanisms to fallback to the last successfully
built skeleton so that a somewhat reasonable experience can be maintained. If we failed to do this and instead
returned nothing, a user would find that practically all semantic experiences that depended on that particular
project would fail or be seriously degraded (e.g. diagnostics). To that end, it's better to limp along with
stale data than to barrel on ahead with no data.
The implementation works by keeping metadata references around associated with a specific for a project. As long as the for
that project is the same, then all the references of it can be reused. When an forks itself, it will also this, allowing previously computed
references to be used by later forks. However, this means that later forks (esp. ones that fail to produce a
skeleton, or which produce a skeleton for different semantics) will not leak backward to a prior , causing it to see a view of the world inapplicable to its current snapshot. A downside
of this is that if a fork happens to a compilation tracker *prior* to the skeleton for it being computed, then
when the skeleton is actually produced it won't be shared forward. In practice the hope is that this is rare,
and that eventually the compilation trackers will have computed the skeleton and will be able to pass it forward
from that point onwards.
The cached data we compute is associated with a particular compilation-tracker. Because of this, once we
compute the skeleton information for that tracker, we hold onto it for as long as the tracker is itself alive.
The presumption here is that once created, it will likely be needed in the future as well as there will still be
downstream projects of different languages that reference this. The only time this won't hold true is if there
was a cross language p2p ref, but then it gets removed from the solution. However, this sort of change should
be rare in a solution, so it's unlikely to happen much, and the only negative is holding onto a little bit more
memory.
Note: this is a mutable struct that updates itself in place atomically. As such, it should never be copied by
consumers (hence the restriction). Consumers wanting to make a copy should
only do so by calling .
We don't want to include private members for several reasons. First, it provides more opportunity to fail
to generate the skeleton reference. Second, it adds much more perf cost having to bind and emit all those
members. Finally, those members serve no purpose as within the IDE we don't even load privates from metadata
in our compilations. So this information doesn't even end up supporting any scenarios. Note: Due to not
loading privates, it means that if a cross language call references something private, you'll get an error,
but go-to-def won't work. That's not ideal, but not the end of the world. And the cost needed to support
that working is simply too high (both on emit and on load) to be worthwhile.
Lock around and to ensure they are updated/read
in an atomic fashion. Static to keep this only as a single allocation. As this is only for reading/writing
very small pieces of data, this is fine.
Static conditional mapping from a compilation to the skeleton set produced for it. This is valuable for a
couple of reasons. First, a compilation tracker may fork, but produce the same compilation. As such, we
want to get the same skeleton set for it. Second, consider the following scenario:
- Project A is referenced by projects B and C (both have a different language than A).
- Producing the compilation for 'B' produces the compilation for 'A' which produces the skeleton that 'B' references.
- B's compilation is released and then GC'ed.
- Producing the compilation for 'C' needs the skeleton from 'A'
At this point we would not want to re-emit the assembly metadata for A's compilation. We already did that
for 'B', and it can be enormously expensive to do so again. So as long as A's compilation lives, we really
want to keep its skeleton cache around.
The version of the project that the corresponds to. Initially set to .
Mapping from metadata-reference-properties to the actual metadata reference for them.
Produces a copy of the , allowing forks of to
reuse s when their dependent semantic version matches ours. In the case
where the version is different, then the clone will attempt to make a new skeleton reference for that
version. If it succeeds, it will use that. If it fails however, it can still use our skeletons.
The actual assembly metadata produced from another compilation.
The documentation provider used to lookup xml docs for any metadata reference we pass out. See
docs on for why this is safe to hold onto despite it
rooting a compilation internally.
The actual assembly metadata produced from another compilation.
The documentation provider used to lookup xml docs for any metadata reference we pass out. See
docs on for why this is safe to hold onto despite it
rooting a compilation internally.
Lock this object while reading/writing from it. Used so we can return the same reference for the same
properties. While this isn't strictly necessary (as the important thing to keep the same is the
AssemblyMetadata), this allows higher layers to see that reference instances are the same which allow
reusing the same higher level objects (for example, the set of references a compilation has).
Represents a change that needs to be made to a , , or both
in response to some user edit.
The original state of the project prior to the user edit.
The state of the project after the user edit was made.
Whether or not can be called on Compilations that may contain
generated documents.
Most translation actions add or remove a single syntax tree which means we can do the "same" change
to a compilation that contains the generated files and one that doesn't; however some translation actions
(like ) will unilaterally remove all trees, and would have unexpected
side effects. This opts those out of operating on ones with generated documents where there would be side effects.
When changes are made to a solution, we make a list of translation actions. If multiple similar changes happen in rapid
succession, we may be able to merge them without holding onto intermediate state.
The action prior to this one. May be a different type.
A non-null if we could create a merged one, null otherwise.
Replacing a single tree doesn't impact the generated trees in a compilation, so we can use this against
compilations that have generated trees.
Updating an editorconfig document updates .
Batch size to break document sets into. This allows us to process things in parallel, without also
creating too many individual actions that then need to be processed.
An implementation of that takes a compilation from another compilation tracker
and updates it to return a generated document with a specific content, regardless of what the generator actually
produces. In other words, it says "take the compilation this other thing produced, and pretend the generator
gave this content, even if it wouldn't." This is used by to ensure that a particular solution snapshot contains a
pre-existing generated document from a prior run that the user is interacting with in the host. The current
snapshot might not produce the same content from before (or may not even produce that document anymore). But we
want to still let the user work with that doc effectively up until the point that new generated documents are
produced and replace it in the host view.
The lazily-produced compilation that has the generated document updated. This is initialized by a call to
.
Intentionally not readonly as this is a mutable struct.
Checksum representing the full checksum tree for this solution compilation state. Includes the checksum for
, as well as the checksums for
if present.
Mapping from project-id to the checksums needed to synchronize it over to an OOP host. Lock this specific
field before reading/writing to it.
Gets the checksum for only the requested project (and any project it depends on)
Gets the checksum for only the requested project (and any project it depends on)
Cached mapping from language (only C#/VB since those are the only languages that support analyzers) to the lists
of analyzer references (see ) to all the s produced by those references. This should only be created and cached on the OOP side
of things so that we don't cause source generators to be loaded (and fixed) within VS (which is .net framework
only).
Cached information about whether a project has source generators or not. Note: this is distinct from as we want to be able to compute it by calling over to our OOP
process (if present) and having it make the determination, without the host necessarily loading generators
itself.
This method should only be called in a .net core host like our out of process server.
This method should only be called in a .net core host like our out of process server.
An identifier that can be used to refer to the same Solution across versions.
The unique id of the solution.
Create a new Solution Id
An optional name to make this id easier to recognize while debugging.
A class that represents all the arguments necessary to create a new solution instance.
The unique Id of the solution.
The version of the solution.
The path to the solution file, or null if there is no solution file.
A list of projects initially associated with the solution.
The analyzers initially associated with this solution.
Per-language analyzer config options that are used as a fallback if the option is not present in produced by the compiler.
Implements a top-level (but not global) virtual editorconfig file that's in scope for all source files of the solution.
Create a new instance of a SolutionInfo.
Create a new instance of a SolutionInfo.
Create a new instance of a SolutionInfo.
type that contains information regarding this solution itself but
no tree information such as project info
type that contains information regarding this solution itself but
no tree information such as project info
The unique Id of the solution.
The version of the solution.
The path to the solution file, or null if there is no solution file.
The id report during telemetry events.
Represents a set of projects and their source code documents.
this is a green node of Solution like ProjectState/DocumentState are for
Project and Document.
String comparer for file paths that caches the last result of the comparison to avoid expensive rehashing of the
same string over and over again.
Note: this case-insensitive comparer is busted on many systems. But we do things this way for compat with the
logic we've had on Windows since forever.
ThreadStatic so that each thread gets its own copy it can safely read/write from, removing the need for
expensive contended locks. The purpose of this type is to allow looking up the same key across N dictionaries
efficiently from the same thread, and this accomplishes that purpose.
Returns true iff the UInt32 represents two ASCII UTF-16 characters in machine endianness.
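The two-characters-at-once ASCII check above can be done with a single mask-and-compare, since a UTF-16 code unit is ASCII exactly when all bits above the low seven are clear. A small illustrative sketch (names and the packing helper are my own, not from the source):

```python
def both_chars_ascii(packed: int) -> bool:
    """True iff a 32-bit value holding two UTF-16 code units
    (one per 16-bit half) contains only ASCII characters.

    A UTF-16 code unit is ASCII when it is < 0x80, i.e. when every
    bit above the low 7 is clear. Masking both halves at once lets
    us test two characters with one compare.
    """
    return (packed & 0xFF80_FF80) == 0


def pack(lo: str, hi: str) -> int:
    """Pack two characters the way a little-endian machine would
    lay adjacent UTF-16 code units out in a 32-bit word."""
    return ord(lo) | (ord(hi) << 16)
```

The mask `0xFF80_FF80` covers bits 7-15 of each half, so any non-ASCII code unit in either position makes the result non-zero.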
Fallback analyzer config options by language. The set of languages does not need to match the set of languages of projects included in the current solution snapshot.
Number of projects of the given language in the solution. The value is guaranteed to always be greater than zero.
If the project count ever hits zero then there simply is no key/value pair for that language in this map.
The Id of the solution. Multiple solution instances may share the same Id.
The path to the solution file or null if there is no solution file.
The solution version. This equates to the solution file's version.
A list of all the ids for all the projects contained by the solution.
Updates the solution with specified workspace kind, workspace version and services.
This implicitly also changes the value of for this solution,
since that is extracted from for backwards compatibility.
The version of the most recently modified project.
True if the solution contains a project with the specified project ID.
True if the solution contains the document in one of its projects
True if the solution contains the additional document in one of its projects
True if the solution contains the analyzer config document in one of its projects
Create a new solution instance that includes projects with the specified project information.
Create a new solution instance without the projects specified.
Creates a new solution instance with the project specified updated to have the new
assembly name.
Creates a new solution instance with the project specified updated to have the output file path.
Creates a new solution instance with the project specified updated to have the output file path.
Creates a new solution instance with the project specified updated to have the compiler output file path.
Creates a new solution instance with the project specified updated to have the default namespace.
Creates a new solution instance with the project specified updated to have the name.
Creates a new solution instance with the project specified updated to have the name.
Creates a new solution instance with the project specified updated to have the project file path.
Create a new solution instance with the project specified updated to have
the specified compilation options.
Create a new solution instance with the project specified updated to have
the specified parse options.
Create a new solution instance with the project specified updated to have
the specified hasAllInformation.
Create a new solution instance with the project specified updated to have
the specified runAnalyzers.
Create a new solution instance with the project specified updated to have
the specified hasSdkCodeStyleAnalyzers.
Create a new solution instance with the project specified updated to include
the specified project references.
Create a new solution instance with the project specified updated to no longer
include the specified project reference.
Create a new solution instance with the project specified updated to contain
the specified list of project references.
Creates a new solution instance with the project's documents in the order specified by the document ids.
The specified document ids must be the same as what is already in the project; no adding or removing is allowed.
Create a new solution instance with the project specified updated to include the
specified metadata references.
Create a new solution instance with the project specified updated to no longer include
the specified metadata reference.
Create a new solution instance with the project specified updated to include only the
specified metadata references.
Create a new solution instance with the project specified updated to include only the
specified analyzer references.
Creates a new solution instance with updated analyzer fallback options.
Creates a new solution instance with an attribute of the document updated, if its value has changed.
Creates a new solution instance with the document specified updated to have the text
specified.
Creates a new solution instance with the additional document specified updated to have the text
specified.
Creates a new solution instance with the document specified updated to have the text
specified.
Creates a new solution instance with the document specified updated to have the text
and version specified.
Creates a new solution instance with the additional document specified updated to have the text
and version specified.
Creates a new solution instance with the analyzer config document specified updated to have the text
and version specified.
Creates a new solution instance with the document specified updated to have the source
code kind specified.
Creates a new solution instance with the additional document specified updated to have the text
supplied by the text loader.
Creates a new solution instance with the analyzer config document specified updated to have the text
supplied by the text loader.
Creates a new snapshot with an updated project and an action that will produce a new
compilation matching the new project out of an old compilation. All dependent projects
are fixed-up if the change to the new project affects its public metadata, and old
dependent compilations are forgotten.
Gets a that details the dependencies between projects for this solution.
Checksum representing the full checksum tree for this solution compilation state. Includes the checksum for
.
Mapping from project-id to the checksums needed to synchronize it (and the projects it depends on) over
to an OOP host. Lock this specific field before reading/writing to it.
Gets the checksum for only the requested project (and any project it depends on)
Gets the checksum for only the requested project (and any project it depends on)
Cone of projects to compute a checksum for. Pass in to get a
checksum for the entire solution
A that was generated by an .
A small struct that holds the values that define the identity of a source generated document, and don't change
as new generations happen. This is mostly for convenience as we are regularly working with this combination of values.
Backing store for .
It's reasonable to capture 'text' here and keep it alive. We're already holding onto the generated text
strongly in the ConstantTextAndVersionSource we're passing to our base type.
Checksum of when it was originally created. This is subtly, but importantly,
different from the checksum acquired from . Specifically, the original
source text may have been created from a in a lossy fashion (for example,
removing BOMs and the like) on the OOP side. As such, its checksum might not be reconstructible from the
actual text and hash algorithm that were used to create the SourceText on the host side. To ensure both the
host and OOP are in agreement about the true content checksum, we store this separately.
This is modeled after , but sets
to for source generated
documents.
Represents the version of source generator execution that a project is at. Source generator results are kept around
as long as this version stays the same and we are in
mode. This has no effect when in mode (as we always rerun
generators on any change). This should effectively be used as a monotonically increasing value.
Controls the major version of source generation execution. When this changes the
generator driver should be dropped and all generation should be rerun.
Controls the minor version of source generation execution. When this changes the
generator driver can be reused and should incrementally determine what the new generated documents should be.
Represents the version of source generator execution that a project is at. Source generator results are kept around
as long as this version stays the same and we are in
mode. This has no effect when in mode (as we always rerun
generators on any change). This should effectively be used as a monotonically increasing value.
Controls the major version of source generation execution. When this changes the
generator driver should be dropped and all generation should be rerun.
Controls the minor version of source generation execution. When this changes the
generator driver can be reused and should incrementally determine what the new generated documents should be.
Controls the major version of source generation execution. When this changes the
generator driver should be dropped and all generation should be rerun.
Controls the minor version of source generation execution. When this changes the
generator driver can be reused and should incrementally determine what the new generated documents should be.
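The major/minor split described above amounts to a simple decision rule: a major bump forces a full rerun with a fresh driver, a minor bump allows incremental reuse, and an unchanged version means cached results remain valid. A hedged sketch of that rule (the type and function names are illustrative, not the real API):

```python
from typing import NamedTuple


class ExecutionVersion(NamedTuple):
    """Illustrative stand-in for the source generation execution version."""
    major: int
    minor: int


def plan_generation(old: ExecutionVersion, new: ExecutionVersion) -> str:
    """Decide how much source generation work a version change requires."""
    if new.major != old.major:
        return "full"          # drop the generator driver, rerun everything
    if new.minor != old.minor:
        return "incremental"   # reuse the driver, recompute only what changed
    return "reuse"             # cached generated documents are still valid
```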
Helper construct to allow a mapping from s to .
Limited to just the surface area the workspace needs.
Helper construct to allow a mapping from s to .
Limited to just the surface area the workspace needs.
Assembly path is used as a part of a generator identity to deal with the case that the user accidentally installed
the same generator twice from two different paths, or actually has two different generators that just happened to
use the same name. In the wild we've seen cases where a user has a broken project or build that results in the same
generator being added twice; we aren't going to try to deduplicate those anywhere since currently the compiler
doesn't do any deduplication either: you'll simply get duplicate outputs which might collide and cause compile
errors. If https://github.com/dotnet/roslyn/issues/56619 is addressed, we can potentially match the compiler
behavior by taking a different approach here.
Assembly path is used as a part of a generator identity to deal with the case that the user accidentally installed
the same generator twice from two different paths, or actually has two different generators that just happened to
use the same name. In the wild we've seen cases where a user has a broken project or build that results in the same
generator being added twice; we aren't going to try to deduplicate those anywhere since currently the compiler
doesn't do any deduplication either: you'll simply get duplicate outputs which might collide and cause compile
errors. If https://github.com/dotnet/roslyn/issues/56619 is addressed, we can potentially match the compiler
behavior by taking a different approach here.
A class that represents both a source text and its version stamp.
The source text.
The version of the source text
Obsolete.
If an error occurred while loading the text the corresponding diagnostic, otherwise null.
Create a new instance.
The text
The version
Obsolete.
Create a new instance.
The text
The version
Diagnostic describing failure to load the source text.
A bitwise combination of the enumeration values to use when computing differences with
.
Since computing differences can be slow with large data sets, you should not use the Character type
unless the given text is relatively small.
Compute the line difference.
Compute the word difference.
Compute the character difference.
The project this document belongs to.
The document's identifier. Many document instances may share the same ID, but only one
document in a solution may have that ID.
The path to the document file or null if there is no document file.
The name of the document.
The sequence of logical folders the document is contained in.
A associated with this document
Get the current text for the document if it is already loaded and available.
Gets the version of the document's text if it is already loaded and available.
Gets the current text for the document asynchronously.
Fetches the current text for the document synchronously.
This is internal for the same reason is internal:
we have specialized cases where we need it, but we worry that making it public will do more harm than good.
Gets the version of the document's text.
Fetches the current version for the document synchronously.
This is internal for the same reason is internal:
we have specialized cases where we need it, but we worry that making it public will do more harm than good.
Gets the version of the document's top level signature.
True if the info of the document changed (name, folders, file path; not the content).
Only checks if the source of the text has changed, no content check is done.
Indicates kind of a
Indicates a regular source
Indicates an
Indicates an
Only checks if the source of the text has changed, no content check is done.
Holds onto a map and an ordering.
s in the order in which they were added to the project (the compilation order).
States ordered by .
The entries in the map are sorted by , which yields locally deterministic order but not the order that
matches the order in which documents were added. Therefore this ordering can't be used when creating compilations and it can't be
used when persisting document lists that do not preserve the GUIDs.
Get states ordered in compilation order.
Returns the documents whose state changed when compared to older states.
Returns the added documents.
Returns the removed documents.
A class that represents access to a source text and its version from a storage location.
if the document that holds onto this loader should do so with a strong reference, versus
a reference that will take the contents of this loader and store them in a recoverable form (e.g. a memory
mapped file within the same process). This should be used when the underlying data is already stored
in a recoverable form somewhere else and it would be wasteful to store another copy. For example, a document
that is backed by memory-mapped contents in another process does not need to dump its content to
another memory-mapped file in the process it lives in. It can always recover the text from the original
process.
True if reloads from its original binary representation (e.g. file on disk).
Load a text and a version of the document.
Implementations of this method should use when creating from an original binary representation and
ignore it otherwise.
Callers of this method should pass specifying the desired properties of . The implementation may return a
that does not satisfy the given requirements. For example, legacy types that do not override this method would ignore all .
Cancellation token.
Load a text and a version of the document.
Obsolete. Null.
Obsolete. Null.
Load a text and a version of the document in the workspace.
Creates a new from an already existing source text and version.
Creates a from a and version.
The text obtained from the loader will be the current text of the container at the time
the loader is accessed.
A class that represents both a tree and its top level signature version
A class that represents both a tree and its top level signature version
The syntax tree
The version of the top level signature of the tree
True if can be reloaded.
Retrieves the underlying if that's what this was
created from and still has access to.
Retrieves just the version information from this instance. Cheaper than when only
the version is needed, and avoiding loading the text is desirable.
Similar to , but for trees. Allows hiding (or introspecting) the details of how
a tree is created for a particular document.
Strong reference to the loaded text and version. Only held onto once computed if . is . Once held onto, this will be returned from all calls to
, or . Once non-null will always
remain non-null.
Weak reference to the loaded text and version that we create whenever the value is computed. We will
attempt to return from this if still alive when clients call back into this. If neither this, nor are available, the value will be reloaded. Once non-null, this will always be non-null.
A recoverable TextAndVersion source that saves its text to temporary storage.
A recoverable TextAndVersion source that saves its text to temporary storage.
True if the is available, false if is returned.
Attempt to return the original loader if we still have it.
This class holds onto a value weakly, but can save its value and recover it on demand
if needed. The value is initially strongly held, until the first time that or is called. At that
point, it will be dumped to secondary storage, and retrieved and weakly held from that point on.
Lazily created. Access via the property.
Whether or not we've saved our value to secondary storage. Used so we only do that once.
Initial strong reference to the SourceText this is initialized with. Will be used to respond to the first
request to get the value, at which point it will be dumped into secondary storage.
Weak reference to the value last returned from this value source. Will thus return the same value as long
as something external is holding onto it.
Attempts to get the value, but only through the weak reference. This will only succeed *after* the value
has been retrieved at least once, and has thus been saved to secondary storage.
Attempts to get the value, either through our strong or weak reference.
Kicks off the work to save this instance to secondary storage at some point in the future. Once that save
occurs successfully, we will drop our cached data and return values from that storage instead.
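The strong-until-saved, then weak-with-recovery pattern described above can be sketched in a few lines. This is an illustrative model only; the `save`/`load` callables stand in for the real secondary storage, and all names here are my own:

```python
import weakref


class RecoverableValue:
    """Holds a value strongly until first access, then dumps it to
    'secondary storage' and keeps only a weak reference, reloading
    on demand if the weak reference dies."""

    def __init__(self, value, save, load):
        self._strong = value    # strong ref, dropped after first save
        self._weak = None       # weak ref created on each retrieval
        self._saved = False     # ensure we save to storage only once
        self._save, self._load = save, load

    def get_value(self):
        # Fast path: the weakly-held value is still alive.
        if self._weak is not None:
            value = self._weak()
            if value is not None:
                return value
        if self._strong is not None:
            # First retrieval: persist, then drop the strong reference.
            value, self._strong = self._strong, None
            if not self._saved:
                self._save(value)
                self._saved = True
        else:
            # Weak reference died: recover from secondary storage.
            value = self._load()
        self._weak = weakref.ref(value)
        return value
```

The value thus stays alive only as long as something external holds it, while remaining recoverable at any time.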
Simple implementation of backed by an opaque .
VersionStamp should only be used to compare versions returned by the same API.
global counter to avoid collision within the same session.
it starts with a big initial number just for clarity in debugging
time stamp
indicates whether there was a collision on the same item
unique version within the same session
Creates a new instance of a VersionStamp.
Creates a new instance of a version stamp based on the specified DateTime.
compares two different versions and returns one of them if there is no collision; otherwise, creates a new version
that can be used later to compare versions between different items
Gets a new VersionStamp that is guaranteed to be newer than its base one
this should only be used on the same item to move it to a newer version
Returns the serialized text form of the VersionStamp.
True if this VersionStamp is newer than the specified one.
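The timestamp-plus-counter scheme above can be modeled simply: prefer the wall clock, and when two stamps land on the same tick (a collision), bump a local counter so the new stamp still compares as newer. A toy sketch, not the real implementation (names and fields are illustrative):

```python
import datetime


class VersionStamp:
    """Toy model of a timestamp-based version with a collision counter."""

    def __init__(self, utc_time: datetime.datetime, local_increment: int = 0):
        self.utc_time = utc_time
        self.local_increment = local_increment

    def get_newer_version(self) -> "VersionStamp":
        """Return a stamp guaranteed to compare newer than this one."""
        now = datetime.datetime.now(datetime.timezone.utc)
        if now > self.utc_time:
            return VersionStamp(now)  # wall clock moved on; counter resets
        # Collision: same (or earlier) tick, so bump the counter instead.
        return VersionStamp(self.utc_time, self.local_increment + 1)

    def newer_than(self, other: "VersionStamp") -> bool:
        return (self.utc_time, self.local_increment) > (other.utc_time, other.local_increment)
```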
Gets the documents from the corresponding workspace's current solution that are associated with the source text's container,
updated to contain the same text as the source if necessary.
Gets the from the corresponding workspace's current solution that is associated with the source text's container
in its current project context, updated to contain the same text as the source if necessary.
Gets the from the corresponding workspace's current solution that is associated with the source text's container
in its current project context, updated to contain the same text as the source if necessary.
Gets the documents from the corresponding workspace's current solution that are associated with the text container.
Gets the document from the corresponding workspace's current solution that is associated with the text container
in its current project context.
Tries to get the document corresponding to the text from the current partial solution
associated with the text's container. If the document does not contain the exact text a document
from a new solution containing the specified text is constructed. If no document is associated
with the specified text's container, or the text's container isn't associated with a workspace,
then the method returns false.
Hash algorithms supported by the debugger used for source file checksums stored in the PDB.
Defines a source hash algorithm constant we can re-use when creating source texts for open documents.
This ensures that both LSP and documents opened as a text buffer are created with the same checksum algorithm
so that we can compare their contents using checksums later on.
Encoding to use when there is no byte order mark (BOM) on the stream. This encoder may throw a
if the stream contains invalid UTF-8 bytes.
Encoding to use when UTF-8 fails. We try to find the following, in order, if available:
1. The default ANSI codepage
2. CodePage 1252.
3. Latin1.
Initializes an instance of from the provided stream. This version differs
from in two ways:
1. It attempts to minimize allocations by trying to read the stream into a byte array.
2. If is null, it will first try UTF-8 and, if that fails, it will
try CodePage 1252. If CodePage 1252 is not available on the system, then it will try Latin1.
The stream containing encoded text.
Specifies an encoding to be used if the actual encoding can't be determined from the stream content (the stream doesn't start with a Byte Order Mark).
If not specified, auto-detect heuristics are used to determine the encoding. If these heuristics fail, the encoding is assumed to be Encoding.Default.
Note that if the stream starts with a Byte Order Mark the value of is ignored.
Indicates if the file can be embedded in the PDB.
Hash algorithm used to calculate document checksum.
The stream content can't be decoded using the specified , or
is null and the stream appears to be a binary file.
An IO error occurred while reading from the stream.
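The fallback chain described above (strict UTF-8 first, then an ANSI-style codepage such as 1252, then Latin-1, which can decode any byte sequence) can be sketched as follows. This is a conceptual Python illustration, not the actual implementation:

```python
def decode_with_fallback(data: bytes) -> str:
    """Decode bytes as text: BOM wins, else strict UTF-8,
    else Windows-1252, else Latin-1 (which never fails)."""
    # A UTF-8 BOM pins the encoding outright.
    if data.startswith(b"\xef\xbb\xbf"):
        return data[3:].decode("utf-8")
    try:
        return data.decode("utf-8", errors="strict")
    except UnicodeDecodeError:
        pass
    try:
        # cp1252 leaves a handful of bytes (0x81, 0x8D, ...) undefined,
        # so strict decoding can still fail here.
        return data.decode("cp1252")
    except (UnicodeDecodeError, LookupError):
        # Latin-1 maps every byte value, so this cannot fail.
        return data.decode("latin-1")
```

Because Latin-1 assigns a character to every byte, the chain always produces *some* text; the earlier stages just produce more faithful results when they apply.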
Try to create a from the given stream using the given encoding.
The input stream containing the encoded text. The stream will not be closed.
The expected encoding of the stream. The actual encoding used may be different if byte order marks are detected.
The checksum algorithm to use.
Throw if binary (non-text) data is detected.
Indicates if the text can be embedded in the PDB.
The decoded from the stream.
The decoder was unable to decode the stream with the given encoding.
Error reading from stream.
Some streams are easily represented as bytes.
The stream
The bytes, if available.
True if the stream's bytes could easily be read, false otherwise.
Read the contents of a FileStream into a byte array.
The FileStream with encoded text.
A byte array filled with the contents of the file.
True if a byte array could be created.
A workspace provides access to an active set of source code projects and documents and their
associated syntax trees, compilations and semantic models. A workspace has a current solution
that is an immutable snapshot of the projects and documents. This property may change over time
as the workspace is updated either from live interactions in the environment or via a call to the
workspace's method.
Current solution. Must be locked with when writing to it.
Determines whether changes made to unchangeable documents will be silently ignored or cause exceptions to be thrown
when they are applied to workspace via .
A document is unchangeable if is false.
Constructs a new workspace instance.
The this workspace uses
A string that can be used to identify the kind of workspace. Usually this matches the name of the class.
Services provided by the host for implementing workspace features.
Override this property if the workspace supports partial semantics for documents.
The kind of the workspace.
This is generally if originating from the host environment, but may be
any other name used for a specific kind of workspace.
Create a new empty solution instance associated with this workspace.
Create a new empty solution instance associated with this workspace, and with the given options.
Create a new empty solution instance associated with this workspace.
The current solution.
The solution is an immutable model of the current set of projects and source documents.
It provides access to source text, syntax trees and semantics.
This property may change as the workspace reacts to changes in the environment or
after is called.
Sets the of this workspace. This method does not raise a event.
This method does not guarantee that linked files will have the same contents. Callers
should enforce that policy before passing in the new solution.
Sets the of this workspace. This method does not raise a event. This method should be used sparingly. As much as possible,
derived types should use the SetCurrentSolution overloads that take a transformation.
This method does not guarantee that linked files will have the same contents. Callers
should enforce that policy before passing in the new solution.
Applies specified transformation to , updates to
the new value and raises a workspace change event of the specified kind. All linked documents in the
solution (which normally will have the same content values) will be updated to have the same content
*identity*. In other words, they will point at the same instances,
allowing that memory to be shared.
Solution transformation.
The kind of workspace change event to raise. The id of the project updated by
to be passed to the workspace change event. And the id of the document
updated by to be passed to the workspace change event.
True if was set to the transformed solution, false if the
transformation did not change the solution.
Ensures that whenever a new language is added to we
allow the host to initialize for that language.
Conversely, if a language is no longer present in
we clear out its .
This mechanism only takes care of flowing the initial snapshot of option values.
It's up to the host to keep the individual values up-to-date by updating
as appropriate.
Implementing the initialization here allows us to uphold an invariant that
the host had the opportunity to initialize
of any snapshot stored in .
Applies specified transformation to , updates to
the new value and performs a requested callback immediately before and after that update. The callbacks
will be invoked atomically while is being held.
Solution transformation. This may be run multiple times. As such it should be
a purely functional transformation on the solution instance passed to it. It should not make stateful
changes elsewhere.
if this operation may raise observable events;
otherwise, . If , the operation will call
to ensure listeners are registered prior to callbacks that may raise
events.
Action to perform immediately prior to updating .
The action will be passed the old that will be replaced and the exact solution
it will be replaced with. The latter may be different than the solution returned by as it will have its updated
accordingly. This will only be run once.
Action to perform once has been updated. The
action will be passed the old that was just replaced and the exact solution it
was replaced with. The latter may be different than the solution returned by as it will have its updated
accordingly. This will only be run once.
Gets or sets the set of all global options and .
Setter also force updates the to have the updated .
Executes an action as a background task, as part of a sequential queue of tasks.
Execute a function as a background task, as part of a sequential queue of tasks.
Override this method to act immediately when the text of a document has changed, as opposed
to waiting for the corresponding workspace changed event to fire asynchronously.
Override this method to act immediately when a document is closing, as opposed
to waiting for the corresponding workspace changed event to fire asynchronously.
Clears all solution data and empties the current solution.
Used so that while disposing we can clear the solution without issuing more
events. As we are disposing, we don't want to cause any current listeners to do work on us as we're in the
process of going away.
This method is called when a solution is cleared.
Override this method if you want to do additional work when a solution is cleared. Call the base method at
the end of your method.
This method is called while a lock is held. Be very careful when overriding, as inappropriate work can cause deadlocks.
This method is called when an individual project is removed.
Override this method if you want to do additional work when a project is removed.
Call the base method at the end of your method.
This method is called when an individual document is removed.
Override this method if you want to do additional work when a document is removed.
Call the base method at the end of your method.
Disposes this workspace. The workspace can no longer be used after it is disposed.
Call this method when the workspace is disposed.
Override this method to do additional work when the workspace is disposed.
Call this method at the end of your method.
Call this method to respond to a solution being opened in the host environment.
Call this method to respond to a solution being reloaded in the host environment.
This method is called when the solution is removed from the workspace.
Override this method if you want to do additional work when the solution is removed.
Call the base method at the end of your method.
Call this method to respond to a solution being removed/cleared/closed in the host environment.
Call this method to respond to a project being added/opened in the host environment.
Call this method to respond to a project being reloaded in the host environment.
Call this method to respond to a project being removed from the host environment.
Currently projects can always be removed, but this method still exists because it's protected and we don't
want to break people who may have derived from and either called it, or overridden it.
Call this method when a project's assembly name is changed in the host environment.
Call this method when a project's output file path is changed in the host environment.
Call this method when a project's output ref file path is changed in the host environment.
Call this method when a project's name is changed in the host environment.
Call this method when a project's default namespace is changed in the host environment.
Call this method when a project's compilation options are changed in the host environment.
Call this method when a project's parse options are changed in the host environment.
Call this method when a project reference is added to a project in the host environment.
Call this method when a project reference is removed from a project in the host environment.
Call this method when a metadata reference is added to a project in the host environment.
Call this method when a metadata reference is removed from a project in the host environment.
Call this method when an analyzer reference is added to a project in the host environment.
Call this method when an analyzer reference is removed from a project in the host environment.
Call this method when an analyzer reference is added to a project in the host environment.
Call this method when an analyzer reference is removed from a project in the host environment.
Call this method when change in the host environment.
Call this method when the status of a project has changed to incomplete.
See for more information.
Call this method when a project's RunAnalyzers property is changed in the host environment.
Call this method when a document is added to a project in the host environment.
Call this method when multiple documents are added to one or more projects in the host environment.
Call this method when a document is reloaded in the host environment.
Call this method when a document is removed from a project in the host environment.
Call this method when the document info changes, such as the name, folders or file path.
Call this method when the text of a document is updated in the host environment.
Call this method when the text of an additional document is updated in the host environment.
Call this method when the text of an analyzer config document is updated in the host environment.
Call this method when the text of a document is changed on disk.
Call this method when the text of a document is changed on disk.
Call this method when the text of an additional document is changed on disk.
Call this method when the text of an analyzer config document is changed on disk.
When a document's text is changed, we need to make sure all of the linked files also have their
content updated in the new solution before applying it to the workspace to avoid the workspace having
solutions with linked files where the contents do not match.
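The linked-file invariant above can be illustrated with a small model. Here a solution is modeled as a plain dictionary and linked documents are identified by a shared file path; `with_linked_text_changed` and the dictionary shape are hypothetical stand-ins for the real Solution/DocumentId types:

```python
def with_linked_text_changed(solution, document_id, new_text):
    """Apply a text change to a document and to every document linked to
    it (the same file included in multiple projects), so the resulting
    solution never shows linked files with mismatched contents.
    `solution` is modeled as {doc_id: {"path": str, "text": str}}."""
    path = solution[document_id]["path"]
    updated = dict(solution)
    for doc_id, doc in solution.items():
        if doc["path"] == path:          # linked documents share a file path
            updated[doc_id] = {**doc, "text": new_text}
    return updated
```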
Allows the caller to indicate the behavior that should happen if this is a
request to update a document not currently in the workspace. This should be used only in hosts where there
may be disparate sources of text change info, without an underlying agreed upon synchronization context to
ensure consistency between events. For example, in an LSP server it might be the case that some events were
being posted by an attached LSP client, while another source of events reported information produced by a
self-hosted project system. These systems might report events on entirely different cadences, leading to
scenarios where there might be disagreements as to the state of the workspace. Clients in those cases must
be resilient to those disagreements (for example, by falling back to a misc-workspace if the LSP client
referred to a document no longer in the workspace populated by the project system).
Call this method when the SourceCodeKind of a document changes in the host environment.
Call this method when an additional document is added to a project in the host environment.
Call this method when an additional document is removed from a project in the host environment.
Call this method when an analyzer config document is added to a project in the host environment.
Call this method when an analyzer config document is removed from a project in the host environment.
Updates all projects to properly reference other projects as project references instead of metadata references.
Determines if the specific kind of change is supported by the method.
Returns true if a reference to referencedProject can be added to
referencingProject; false otherwise.
Apply changes made to a solution back to the workspace.
The specified solution must be one that originated from this workspace. If it is not, or the workspace
has been updated since the solution was obtained from the workspace, then this method returns false. This method
will still throw if the solution contains changes that are not supported according to the
method.
Thrown if the solution contains changes not supported according to the
method.
Called during a call to to determine if a specific change to is allowed.
This method is only called if returns false for .
If returns true, then that means all changes are allowed and this method does not need to be called.
The old of the project from prior to the change.
The new of the project that was passed to .
The project contained in the passed to .
Called during a call to to determine if a specific change to is allowed.
This method is only called if returns false for .
If returns true, then that means all changes are allowed and this method does not need to be called.
The old of the project from prior to the change.
The new of the project that was passed to .
The project contained in the passed to .
This method is called during for each project
that has been added, removed or changed.
Override this method if you want to modify how project changes are applied.
This method is called during to add a project to the current solution.
Override this method to implement the capability of adding projects.
This method is called during to remove a project from the current solution.
Override this method to implement the capability of removing projects.
This method is called during to change the compilation options.
Override this method to implement the capability of changing compilation options.
This method is called during to change the parse options.
Override this method to implement the capability of changing parse options.
This method is called during to add a project reference to a project.
Override this method to implement the capability of adding project references.
This method is called during to remove a project reference from a project.
Override this method to implement the capability of removing project references.
This method is called during to add a metadata reference to a project.
Override this method to implement the capability of adding metadata references.
This method is called during to remove a metadata reference from a project.
Override this method to implement the capability of removing metadata references.
This method is called during to add an analyzer reference to a project.
Override this method to implement the capability of adding analyzer references.
This method is called during to remove an analyzer reference from a project.
Override this method to implement the capability of removing analyzer references.
This method is called during to add an analyzer reference to the solution.
Override this method to implement the capability of adding analyzer references.
This method is called during to remove an analyzer reference from the solution.
Override this method to implement the capability of removing analyzer references.
This method is called during to add a new document to a project.
Override this method to implement the capability of adding documents.
This method is called during to remove a document from a project.
Override this method to implement the capability of removing documents.
This method is called to change the text of a document.
Override this method to implement the capability of changing document text.
This method is called to change the info of a document.
Override this method to implement the capability of changing a document's info.
This method is called during to add a new additional document to a project.
Override this method to implement the capability of adding additional documents.
This method is called during to remove an additional document from a project.
Override this method to implement the capability of removing additional documents.
This method is called to change the text of an additional document.
Override this method to implement the capability of changing additional document text.
This method is called during to add a new analyzer config document to a project.
Override this method to implement the capability of adding analyzer config documents.
This method is called during to remove an analyzer config document from a project.
Override this method to implement the capability of removing analyzer config documents.
This method is called to change the text of an analyzer config document.
Override this method to implement the capability of changing analyzer config document text.
Throws an exception if the solution is not empty.
Throws an exception if the project is not part of the current solution.
Throws an exception if the project is part of the current solution.
Throws an exception if a project does not have a specific project reference.
Throws an exception if a project already has a specific project reference.
Throws an exception if a project has a transitive reference to another project.
Throws an exception if a project does not have a specific metadata reference.
Throws an exception if a project already has a specific metadata reference.
Throws an exception if a project does not have a specific analyzer reference.
Throws an exception if a project already has a specific analyzer reference.
Throws an exception if a project already has a specific analyzer reference.
Throws an exception if a project already has a specific analyzer reference.
Throws an exception if a document is not part of the current solution.
Throws an exception if an additional document is not part of the current solution.
Throws an exception if an analyzer config document is not part of the current solution.
Throws an exception if a document is already part of the current solution.
Throws an exception if an additional document is already part of the current solution.
Throws an exception if the analyzer config document is already part of the current solution.
Gets the name to use for a project in an error message.
Gets the name to use for a document in an error message.
Gets the name to use for an additional document in an error message.
Gets the name to use for an analyzer document in an error message.
A class that responds to text buffer changes
Tracks the document ID in the current context for a source text container for an opened text buffer.
For each entry in this map, there must be a corresponding entry in where the document ID in the current context is one of the associated document IDs.
Tracks all the associated document IDs for a source text container for an opened text buffer.
True if this workspace supports manually opening and closing documents.
True if this workspace supports manually changing the active context document of a text buffer by calling .
Open the specified document in the host environment.
Close the specified document in the host environment.
Open the specified additional document in the host environment.
Close the specified additional document in the host environment.
Open the specified analyzer config document in the host environment.
Close the specified analyzer config document in the host environment.
Determines if the document is currently open in the host environment.
Gets a list of the currently opened documents.
Gets the ids for documents in the snapshot associated with the given .
Documents are normally associated with a text container when the documents are opened.
Gets the id for the document associated with the given text container in its current context.
Documents are normally associated with a text container when the documents are opened.
Finds the related to the given that
is in the current context. If the is currently closed, then
it is returned directly. If it is open, then this returns the same result that
would return for the
.
Call this method to tell the host environment to change the current active context to this document. Only supported if
returns true.
Call this method when a document has been made the active context in the host environment.
Registers a SourceTextContainer to a source generated document. Unlike , this doesn't result in the workspace
being updated any time the contents of the container are changed; instead this ensures that features going
from the text container to the buffer back to a document get a usable document.
Tries to close the document identified by . This is only needed by
implementations of ILspWorkspace to indicate that the workspace should try to transition to the closed state
for this document, but can bail out gracefully if they don't know about it (for example if they haven't
heard about the file from the project system). Subclasses should determine what file contents they should
transition to if the file is within the workspace.
The DocumentId of the current context document attached to the textContainer, if any.
This method is called during OnSolutionReload. Override this method if you want to manipulate
the reloaded solution.
An event raised whenever the current solution is changed.
An event raised *immediately* whenever the current solution is changed. Handlers
should be written to be very fast. Called on the same thread changing the workspace,
which may vary depending on the workspace.
An event raised whenever the workspace or part of its solution model
fails to access a file or other external resource.
An event that is fired when a is opened in the editor.
An event that is fired when any is opened in the editor.
An event that is fired when a is closed in the editor.
An event that is fired when any is closed in the editor.
An event that is fired when the active context document associated with a buffer
changes.
Gets the workspace associated with the specific text container.
Register a correspondence between a text container and a workspace.
Unregister a correspondence between a text container and a workspace.
Returns a for a given text container.
Used for batching up a lot of events and only combining them into a single request to update generators. The
represents the projects that have changed, and which need their source-generators
re-run. in the list indicates the entire solution has changed and all generators need to
be rerun. The represents whether source generators should be fully rerun for the requested
project or solution. If , the existing generator driver will be used, which may result
in no actual changes to emitted source (as the driver may decide no inputs changed, and thus all outputs should
be reused). If , the existing driver will be dropped, forcing all generation to be redone.
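The batching scheme above can be sketched as a pure function that collapses queued requests. In this sketch, `None` stands in for the "entire solution" marker, and the force-regeneration flags for duplicate requests are OR-ed together; both the function name and the merge rule for solution-wide requests are assumptions based on the description, not the actual implementation:

```python
def combine_generator_requests(requests):
    """Collapse a batch of (project_id, force_regeneration) pairs into
    the minimal set of work: project_id None means 'whole solution'.
    A solution-wide request subsumes per-project ones."""
    solution_wide = False
    force_any = False
    per_project = {}
    for project_id, force in requests:
        if project_id is None:
            solution_wide = True
            force_any = force_any or force
        else:
            # OR the force flags for repeated requests on the same project.
            per_project[project_id] = per_project.get(project_id, False) or force
    if solution_wide:
        # The single solution-wide request must force regeneration if any
        # collapsed request asked for it.
        return [(None, force_any or any(per_project.values()))]
    return sorted(per_project.items())
```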
The describing any kind of workspace change.
When linked files are edited, one document change event is fired per linked file. All of
these events contain the same , and they all contain the same
. This is so that we can trigger document change events on all
affected documents without reporting intermediate states in which the linked file contents
do not match.
If linked documents are being changed, there may be multiple events with the same
and . Note that the workspace starts with its solution set to an empty solution.
replaces the previous solution, which might be the empty
one.
If linked documents are being changed, there may be multiple events with the same
and . Note replaces the previous
solution with the empty one.
The id of the affected . Can be if this is a change unrelated
to a project (for example ). Should be non- for:
The id of the affected . Can be if this is a change unrelated
to a document (for example ). Should be non- for:
The current solution changed for an unspecified reason.
A solution was added to the workspace.
The current solution was removed from the workspace.
The current solution was cleared of all projects and documents.
The current solution was reloaded.
A project was added to the current solution.
A project was removed from the current solution.
A project in the current solution was changed.
A project in the current solution was reloaded.
A document was added to the current solution.
A document was removed from the current solution.
A document in the current solution was reloaded.
A document in the current solution was changed.
When linked files are edited, one event is fired per
linked file. All of these events contain the same OldSolution, and they all contain
the same NewSolution. This is so that we can trigger document change events on all
affected documents without reporting intermediate states in which the linked file
contents do not match. Each event does not represent
an incremental update from the previous event in this special case.
An additional document was added to the current solution.
An additional document was removed from the current solution.
An additional document in the current solution was reloaded.
An additional document in the current solution was changed.
A document in the current solution had its info changed: name, folders, or file path.
An analyzer config document was added to the current solution.
An analyzer config document was removed from the current solution.
An analyzer config document in the current solution was reloaded.
An analyzer config document in the current solution was changed.
that uses workspace services (i.e. ) to load file content.
Known workspace kinds
kind.
Is this an that the loader considers to be part of the hosting
process. Either part of the compiler itself or the process hosting the compiler.
For a given return the location it was originally added
from. This will return null for any value that was not directly added through the
loader.
The implementation for . This type provides caching and tracking of inputs given
to .
This type generally assumes that files on disk aren't changing, since it ensures that two calls to
will always return the same thing, per that interface's contract.
A given analyzer can have two paths that represent it: the original path of the analyzer passed into this type
and the path returned after calling . In the
places where differentiating between the two is important, the original path will be referred to as the "original" and
the latter is referred to as "resolved".
The original paths are compared ordinally as the expectation is the host needs to handle normalization,
if necessary. That means if the host passes in a.dll and a.DLL the loader will treat them as different
even if the underlying file system is case insensitive.
These are paths generated by the loader, or one of its plugins. They need no normalization and hence
should be compared ordinally.
Simple names are not case sensitive
This is a map between the original full path and how it is represented in this loader. Specifically
the key is the original path before it is considered by .
Access must be guarded by
This is a map between assembly simple names and the collection of original paths that map to them.
Access must be guarded by
Simple names are not case sensitive
Map from resolved paths to the original ones
Access must be guarded by
The paths are compared ordinally here as these are computed values, not user supplied ones, and the
values should refer to the file on disk with no alteration of its path.
Whether or not we're disposed. Once disposed, all functionality on this type should throw.
The implementation needs to load an with the specified from
the specified path.
This method should return an instance or throw.
Determines if the satisfies the request for
. This is partial'd out as each runtime has a different
definition of matching name.
Called from the consumer of to load an analyzer assembly from disk. It
should _not_ be called from the implementation.
Get the path a satellite assembly should be loaded from for the given resolved
analyzer path and culture
This method mimics the .NET lookup rules for satellite assemblies and will return the ideal
resource assembly for the given culture.
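The culture fallback described above can be sketched as simple path probing. `probe_satellite_assembly` is a hypothetical helper; it mimics the parent-culture fallback ('fr-FR' falls back to 'fr'), while the real .NET lookup also consults deeper parent cultures and the neutral resources:

```python
import os

def probe_satellite_assembly(resolved_analyzer_path, culture_name,
                             exists=os.path.exists):
    """Return the first existing satellite assembly path for the given
    culture, walking the culture fallback chain, or None if no satellite
    assembly is available."""
    directory = os.path.dirname(resolved_analyzer_path)
    name, _ = os.path.splitext(os.path.basename(resolved_analyzer_path))
    culture = culture_name
    while culture:
        candidate = os.path.join(directory, culture, name + ".resources.dll")
        if exists(candidate):
            return candidate
        # Fall back to the parent culture: 'fr-FR' -> 'fr' -> give up.
        culture = culture.rpartition("-")[0]
    return None
```

Injecting `exists` keeps the sketch testable without touching the file system.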
Return the best (original, resolved) path information for loading an assembly with the specified .
Return an which does not lock assemblies on disk and that is
most appropriate for the current platform.
A shadow copy path will be created on Windows and this value
will be the base directory where shadow copy assemblies are stored.
The base directory for shadow copies. Each instance of
gets its own
subdirectory under this directory. This is also the starting point
for scavenge operations.
As long as this mutex is alive, other instances of this type will not try to clean
up the shadow directory.
This is a counter that is incremented each time a new shadow subdirectory is created to ensure they
have unique names.
This is a map from the original directory name to the numbered directory name it
occupies in the shadow directory.
This interface can be called from multiple threads for the same original assembly path. This
is a map between the original path and the Task that completes when the shadow copy for that
original path completes.
This is the number of shadow copies that have occurred in this instance.
This is used for testing, it should not be used for any other purpose.
Get the shadow directory for the given original analyzer file path.
This type has to account for multiple threads calling into the various resolver APIs. To avoid two threads
writing at the same time this method is used to ensure only one thread _wins_ and both can wait for
that thread to complete the copy.
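The "one thread wins" coordination can be sketched with one future per original path: the first thread to register a future performs the copy; every later caller finds the existing future and waits on it. All names here are illustrative, not the loader's actual API:

```python
import shutil
import threading
from concurrent.futures import Future

class ShadowCopyCoordinator:
    """Ensures that concurrent requests to shadow copy the same original
    path result in exactly one copy; losing threads wait on the winner."""

    def __init__(self, copy=shutil.copytree):
        self._lock = threading.Lock()
        self._copies = {}  # original path -> Future of the shadow copy path
        self._copy = copy

    def get_shadow_copy(self, original, destination):
        with self._lock:
            future = self._copies.get(original)
            winner = future is None
            if winner:
                future = Future()
                self._copies[original] = future
        if not winner:
            return future.result()       # wait for the winning thread's copy
        try:
            self._copy(original, destination)   # this thread won: copy once
            future.set_result(destination)
        except BaseException as e:
            future.set_exception(e)
            raise
        return destination
```

Note that the map is consulted and updated under the lock, but the copy itself runs outside it, so losers block only on the future, not on the lock.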
This interface gives the host the ability to control the actual path used to load an analyzer into the
compiler.
Instances of these types are considered in the order they are added to the .
The first instance to return true from will be considered to
be the owner of that path. From then on only that instance will be called for the other methods on this
interface.
For example in a typical session: the will return true for
analyzer paths under C:\Program Files\dotnet. That means the ,
which appears last on Windows, will never see these paths and hence won't shadow copy them.
Instances of this type will be accessed from multiple threads. All method implementations are expected
to be idempotent.
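The first-match-wins ordering can be sketched as a resolver chain. `PrefixResolver` is a toy stand-in for real implementations such as the program-files and shadow-copy resolvers mentioned above; method names loosely mirror the interface's intent but are assumptions:

```python
class PrefixResolver:
    """Toy resolver: claims paths under a prefix and maps them into a
    target directory (illustrative only)."""
    def __init__(self, prefix, target):
        self.prefix, self.target = prefix, target
    def is_analyzer_path_handled(self, path):
        return path.startswith(self.prefix)
    def get_resolved_analyzer_path(self, path):
        return self.target + path[len(self.prefix):]

class AnalyzerPathResolverChain:
    """First-match-wins chain: the first resolver that claims a path owns
    it, and later resolvers never see that path."""
    def __init__(self, resolvers):
        self._resolvers = list(resolvers)  # considered in registration order
    def resolve(self, original_path):
        for resolver in self._resolvers:
            if resolver.is_analyzer_path_handled(original_path):
                return resolver.get_resolved_analyzer_path(original_path)
        return original_path               # no owner: load in place
```

Placing a load-in-place resolver for trusted global locations ahead of a shadow-copy resolver reproduces the behavior described above: global paths are claimed first and never shadow copied.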
Is this path handled by this instance?
This method is used to allow compiler hosts to intercept an analyzer path and redirect it to
a different location.
This will only be called for paths that return true from .
This method is used to allow compiler hosts to intercept an analyzer satellite path and redirect it to
a different location. A null return here means there is no available satellite assembly for that
culture.
This will only be called for paths that return true from .
This implementation is used to handle analyzers that
exist in global install locations on Windows. These locations do not need to be shadow
copied because they are read-only and are not expected to be updated. Putting this resolver
before shadow copy will let them load in place.
See for an explanation of this constant value.
Realizes the array.
Realizes the array and clears the collection.
Write to slot .
Fills in unallocated slots preceding the , if any.
Realizes the array.
Realizes the array, downcasting each element to a derived type.
Realizes the array and disposes the builder in one operation.
struct enumerator used in foreach.
Generic implementation of object pooling pattern with predefined pool size limit. The main
purpose is that limited number of frequently used objects can be kept in the pool for
further recycling.
Notes:
1) it is not the goal to keep all returned objects. Pool is not meant for storage. If there
is no space in the pool, extra returned objects will be dropped.
2) it is implied that if an object was obtained from a pool, the caller will return it back in
a relatively short time. Keeping checked out objects for long durations is ok, but
reduces usefulness of pooling. Just new up your own.
Not returning objects to the pool is not detrimental to the pool's work, but is a bad practice.
Rationale:
If there is no intent for reusing the object, do not use pool - just use "new".
Not using System.Func{T} because this file is linked into the (debugger) Formatter,
which does not have that type (since it compiles against .NET 2.0).
Produces an instance.
Search strategy is a simple linear probing which is chosen for its cache-friendliness.
Note that Free will try to store recycled objects close to the start thus statistically
reducing how far we will typically search.
Returns objects to the pool.
Search strategy is a simple linear probing which is chosen for its cache-friendliness.
Note that Free will try to store recycled objects close to the start thus statistically
reducing how far we will typically search in Allocate.
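The Allocate/Free strategy reads naturally as a small fixed-size pool. This Python sketch mirrors the linear-probing behavior described above (names are illustrative; the real implementation may also keep a dedicated fast-path slot, omitted here):

```python
import threading

class ObjectPool:
    """Fixed-size object pool using linear probing over a slot array."""

    def __init__(self, factory, size=16):
        self._factory = factory          # produces a new instance when the pool is empty
        self._slots = [None] * size
        self._lock = threading.Lock()

    def allocate(self):
        """Scan slots from the start and take the first pooled object."""
        with self._lock:
            for i, obj in enumerate(self._slots):
                if obj is not None:
                    self._slots[i] = None
                    return obj
        return self._factory()           # pool empty: just new one up

    def free(self, obj):
        """Store the returned object close to the start so later
        allocate() scans stay short; drop it if the pool is full."""
        with self._lock:
            for i, slot in enumerate(self._slots):
                if slot is None:
                    self._slots[i] = obj
                    return
        # No free slot: drop the object; the pool is not meant for storage.
```

Storing near the start on free and scanning from the start on allocate is what keeps typical searches short, as the note above explains.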
Removes an object from leak tracking.
This is called when an object is returned to the pool. It may also be explicitly
called if an object allocated from the pool is intentionally not being returned
to the pool. This can be of use with pooled arrays if the consumer wants to
return a larger array to the pool than was originally allocated.
Provides pooled delegate instances to help avoid closure allocations for delegates that require a state argument
with APIs that do not provide appropriate overloads with state arguments.
Gets an delegate, which calls with the specified
. The resulting may be called any number of times
until the returned is disposed.
The following example shows the use of a capturing delegate for a callback action that requires an
argument:
int x = 3;
RunWithActionCallback(() => this.DoSomething(x));
The following example shows the use of a pooled delegate to avoid capturing allocations for the same
callback action:
int x = 3;
using var _ = GetPooledAction(arg => arg.self.DoSomething(arg.x), (self: this, x), out Action action);
RunWithActionCallback(action);
The type of argument to pass to .
The unbound action delegate.
The argument to pass to the unbound action delegate.
A delegate which calls with the specified
.
A disposable which returns the object to the delegate pool.
Gets an delegate, which calls with the specified
. The resulting may be called any number of times
until the returned is disposed.
The following example shows the use of a capturing delegate for a callback action that requires an
argument:
int x = 3;
RunWithActionCallback(a => this.DoSomething(a, x));
The following example shows the use of a pooled delegate to avoid capturing allocations for the same
callback action:
int x = 3;
using var _ = GetPooledAction((a, arg) => arg.self.DoSomething(a, arg.x), (self: this, x), out Action<int> action);
RunWithActionCallback(action);
The type of the first parameter of the bound action.
The type of argument to pass to .
The unbound action delegate.
The argument to pass to the unbound action delegate.
A delegate which calls with the specified
.
A disposable which returns the object to the delegate pool.
Gets an delegate, which calls with the specified
. The resulting may be called any number of times
until the returned is disposed.
The following example shows the use of a capturing delegate for a callback action that requires an
argument:
int x = 3;
RunWithActionCallback((a, b) => this.DoSomething(a, b, x));
The following example shows the use of a pooled delegate to avoid capturing allocations for the same
callback action:
int x = 3;
using var _ = GetPooledAction((a, b, arg) => arg.self.DoSomething(a, b, arg.x), (self: this, x), out Action<int, int> action);
RunWithActionCallback(action);
The type of the first parameter of the bound action.
The type of the second parameter of the bound action.
The type of argument to pass to .
The unbound action delegate.
The argument to pass to the unbound action delegate.
A delegate which calls with the specified
.
A disposable which returns the object to the delegate pool.
Gets an delegate, which calls with the specified
. The resulting may be called any number of times
until the returned is disposed.
The following example shows the use of a capturing delegate for a callback action that requires an
argument:
int x = 3;
RunWithActionCallback((a, b, c) => this.DoSomething(a, b, c, x));
The following example shows the use of a pooled delegate to avoid capturing allocations for the same
callback action:
int x = 3;
using var _ = GetPooledAction((a, b, c, arg) => arg.self.DoSomething(a, b, c, arg.x), (self: this, x), out Action<int, int, int> action);
RunWithActionCallback(action);
The type of the first parameter of the bound action.
The type of the second parameter of the bound action.
The type of the third parameter of the bound action.
The type of argument to pass to .
The unbound action delegate.
The argument to pass to the unbound action delegate.
A delegate which calls with the specified
.
A disposable which returns the object to the delegate pool.
Gets a delegate, which calls with the
specified . The resulting may be called any
number of times until the returned is disposed.
The following example shows the use of a capturing delegate for a predicate that requires an
argument:
int x = 3;
RunWithPredicate(() => this.IsSomething(x));
The following example shows the use of a pooled delegate to avoid capturing allocations for the same
predicate:
int x = 3;
using var _ = GetPooledFunction(arg => arg.self.IsSomething(arg.x), (self: this, x), out Func<bool> predicate);
RunWithPredicate(predicate);
The type of argument to pass to .
The type of the return value of the function.
The unbound function delegate.
The argument to pass to the unbound function delegate.
A delegate which calls with the specified
.
A disposable which returns the object to the delegate pool.
Equivalent to ,
except typed such that it can be used to create a pooled .
Gets a delegate, which calls with the
specified . The resulting may be called any
number of times until the returned is disposed.
The following example shows the use of a capturing delegate for a predicate that requires an
argument:
int x = 3;
RunWithPredicate(a => this.IsSomething(a, x));
The following example shows the use of a pooled delegate to avoid capturing allocations for the same
predicate:
int x = 3;
using var _ = GetPooledFunction((a, arg) => arg.self.IsSomething(a, arg.x), (self: this, x), out Func<int, bool> predicate);
RunWithPredicate(predicate);
The type of the first parameter of the bound function.
The type of argument to pass to .
The type of the return value of the function.
The unbound function delegate.
The argument to pass to the unbound function delegate.
A delegate which calls with the specified
.
A disposable which returns the object to the delegate pool.
Gets a delegate, which calls with the
specified . The resulting may be called any
number of times until the returned is disposed.
The following example shows the use of a capturing delegate for a predicate that requires an
argument:
int x = 3;
RunWithPredicate((a, b) => this.IsSomething(a, b, x));
The following example shows the use of a pooled delegate to avoid capturing allocations for the same
predicate:
int x = 3;
using var _ = GetPooledFunction((a, b, arg) => arg.self.IsSomething(a, b, arg.x), (self: this, x), out Func<int, int, bool> predicate);
RunWithPredicate(predicate);
The type of the first parameter of the bound function.
The type of the second parameter of the bound function.
The type of argument to pass to .
The type of the return value of the function.
The unbound function delegate.
The argument to pass to the unbound function delegate.
A delegate which calls with the specified
.
A disposable which returns the object to the delegate pool.
Gets a delegate, which calls with the
specified . The resulting may be called any
number of times until the returned is disposed.
The following example shows the use of a capturing delegate for a predicate that requires an
argument:
int x = 3;
RunWithPredicate((a, b, c) => this.IsSomething(a, b, c, x));
The following example shows the use of a pooled delegate to avoid capturing allocations for the same
predicate:
int x = 3;
using var _ = GetPooledFunction((a, b, c, arg) => arg.self.IsSomething(a, b, c, arg.x), (self: this, x), out Func<int, int, int, bool> predicate);
RunWithPredicate(predicate);
The type of the first parameter of the bound function.
The type of the second parameter of the bound function.
The type of the third parameter of the bound function.
The type of argument to pass to .
The type of the return value of the function.
The unbound function delegate.
The argument to pass to the unbound function delegate.
A delegate which calls with the specified
.
A disposable which returns the object to the delegate pool.
A releaser for a pooled delegate.
This type is intended for use as the resource of a using statement. When used in this manner,
should not be called explicitly.
If used without a using statement, calling is optional. If the call is
omitted, the object will not be returned to the pool. The behavior of this type if is
called multiple times is undefined.
The usage is:
var inst = PooledStringBuilder.GetInstance();
var sb = inst.builder;
... Do Stuff...
... sb.ToString() ...
inst.Free();
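The GetInstance()/Free() usage above can be sketched in Python. This is an illustrative stand-in, not the real C# implementation; the names StringBuilderPool, get_instance, and free are hypothetical, with io.StringIO standing in for the pooled StringBuilder:

```python
import io

class StringBuilderPool:
    """Pools StringIO instances so hot paths avoid repeated allocation."""
    def __init__(self, size=32):
        self._size = size
        self._items = []

    def get_instance(self):
        # Reuse a pooled builder if one is available, else allocate a new one.
        return self._items.pop() if self._items else io.StringIO()

    def free(self, builder):
        # Reset the builder and return it to the pool; drop it if the pool is full.
        builder.seek(0)
        builder.truncate(0)
        if len(self._items) < self._size:
            self._items.append(builder)

pool = StringBuilderPool()
sb = pool.get_instance()
sb.write("hello, ")
sb.write("pool")
text = sb.getvalue()
pool.free(sb)  # forgetting this only costs a reallocation later, as noted above
```

As the documentation notes, failing to call free is benign: the object simply is not returned to the pool.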
If someone needs to create a private pool
The size of the pool.
Defines diagnostic info for Roslyn experimental APIs.
Very cheap trivial comparer that never matches the keys,
should only be used in empty dictionaries.
Verify nodes match source.
Return the index of the first difference between
the two strings, or -1 if the strings are the same.
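The first-difference contract described above can be sketched as follows (a Python illustration of the documented behavior, not the actual implementation):

```python
def first_difference(a: str, b: str) -> int:
    """Return the index of the first character where a and b differ,
    or -1 if the strings are identical."""
    n = min(len(a), len(b))
    for i in range(n):
        if a[i] != b[i]:
            return i
    # One string is a prefix of the other: they differ at index n,
    # unless the lengths are equal, in which case the strings match.
    return -1 if len(a) == len(b) else n
```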
Returns true if the provided position is in a hidden region inaccessible to the user.
The collection of extension methods for the type
Converts a sequence to an immutable array.
Elemental type of the sequence.
The sequence to convert.
An immutable copy of the contents of the sequence.
If items is null (default)
If the sequence is null, this will throw
Converts a sequence to an immutable array.
Elemental type of the sequence.
The sequence to convert.
An immutable copy of the contents of the sequence.
If the sequence is null, this will return an empty array.
Converts a sequence to an immutable array.
Elemental type of the sequence.
The sequence to convert.
An immutable copy of the contents of the sequence.
If the sequence is null, this will return the default (null) array.
Converts an array to an immutable array. The array must not be null.
The sequence to convert
Converts an array to an immutable array.
The sequence to convert
If the sequence is null, this will return the default (null) array.
Converts an array to an immutable array.
The sequence to convert
If the array is null, this will return an empty immutable array.
Reads bytes from specified .
The stream.
Read-only content of the stream.
Maps an immutable array to another immutable array.
The array to map
The mapping delegate
If the items array is empty, this will return an empty immutable array.
Maps an immutable array to another immutable array.
The sequence to map
The mapping delegate
The extra input used by mapping delegate
If the items array is empty, this will return an empty immutable array.
Maps an immutable array to another immutable array.
The sequence to map
The mapping delegate
The extra input used by mapping delegate
If the items array is empty, this will return an empty immutable array.
Maps a subset of an immutable array to another immutable array.
Type of the source array items
Type of the transformed array items
The array to transform
The condition to use for filtering the array content.
A transform function to apply to each element that is not filtered out by .
If the items array is empty, this will return an empty immutable array.
Maps a subset of an immutable array to another immutable array.
Type of the source array items
Type of the transformed array items
Type of the extra argument
The array to transform
The condition to use for filtering the array content.
A transform function to apply to each element that is not filtered out by .
The extra input used by and .
If the items array is empty, this will return an empty immutable array.
Maps and flattens a subset of an immutable array to another immutable array.
Type of the source array items
Type of the transformed array items
The array to transform
A transform function to apply to each element.
If the array's length is 0, this will return an empty immutable array.
Maps and flattens a subset of an immutable array to another immutable array.
Type of the source array items
Type of the transformed array items
The array to transform
A transform function to apply to each element.
If the array's length is 0, this will return an empty immutable array.
Maps and flattens a subset of an immutable array to another immutable array.
Type of the source array items
Type of the transformed array items
The array to transform
A transform function to apply to each element.
If the items array is empty, this will return an empty immutable array.
Maps and flattens a subset of an immutable array to another immutable array.
Type of the source array items
Type of the transformed array items
The array to transform
The condition to use for filtering the array content.
A transform function to apply to each element that is not filtered out by .
If the items array is empty, this will return an empty immutable array.
Maps and flattens a subset of an immutable array to another immutable array.
Type of the source array items
Type of the transformed array items
The array to transform
The condition to use for filtering the array content.
A transform function to apply to each element that is not filtered out by .
If the items array is empty, this will return an empty immutable array.
Maps and flattens a subset of an immutable array to another immutable array.
Type of the source array items
Type of the transformed array items
The array to transform
The condition to use for filtering the array content.
A transform function to apply to each element that is not filtered out by .
If the items array is empty, this will return an empty immutable array.
Maps and flattens a subset of an immutable array to another immutable array.
Type of the source array items
Type of the argument to pass to the predicate and selector
Type of the transformed array items
The array to transform
The condition to use for filtering the array content.
A transform function to apply to each element that is not filtered out by .
If the items array is empty, this will return an empty immutable array.
Maps an immutable array through a function that returns ValueTasks, returning the new ImmutableArray.
Maps an immutable array through a function that returns ValueTasks, returning the new ImmutableArray.
Zips two immutable arrays together through a mapping function, producing another immutable array.
If the items array is empty, this will return an empty immutable array.
Creates a new immutable array based on filtered elements by the predicate. The array must not be null.
The array to process
The delegate that defines the conditions of the element to search for.
Creates a new immutable array based on filtered elements by the predicate. The array must not be null.
The array to process
The delegate that defines the conditions of the element to search for.
Casts the immutable array of a Type to an immutable array of its base type.
Determines whether this instance and another immutable array are equal.
The comparer to determine if the two arrays are equal.
True if the two arrays are equal
Returns an empty array if the input array is null (default)
Returns an empty array if the input nullable value type is null or the underlying array is null (default)
Returns an array of distinct elements, preserving the order in the original array.
If the array has no duplicates, the original array is returned. The original array must not be null.
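The behavior described above, including returning the original array when no duplicates are found, can be sketched in Python (an illustration only; the real implementation operates on ImmutableArray in C#):

```python
def distinct_preserving_order(items: list) -> list:
    """Return distinct elements in first-seen order. If the input has no
    duplicates, return the original list object unchanged so callers can
    detect the no-op case by identity."""
    seen = set()
    result = []
    for item in items:
        if item not in seen:
            seen.add(item)
            result.append(item)
    return items if len(result) == len(items) else result
```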
Determines whether duplicates exist using default equality comparer.
Array to search for duplicates
Whether duplicates were found
Determines whether duplicates exist using . Use the other override
if you don't need a custom comparer.
Array to search for duplicates
Comparer to use in search
Whether duplicates were found
Create BitArray with at least the specified number of bits.
return a bit array with all bits set from index 0 through bitCount-1
Make a copy of a bit array.
Invert all the bits in the vector.
Is the given bit array null?
Modify this bit vector by bitwise AND-ing each element with the other bit vector.
For the purposes of the intersection, any bits beyond the current length will be treated as zeroes.
Return true if any changes were made to the bits of this bit vector.
Modify this bit vector by '|'ing each element with the other bit vector.
True if any bits were set as a result of the union.
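The intersection and union contracts above (mutate in place, report whether any bits changed) can be sketched with a toy bit vector. This Python class is a hypothetical stand-in backed by a single arbitrary-precision int, not the word-array representation the real type uses:

```python
class BitVector:
    """Minimal illustrative bit vector backed by a Python int."""
    def __init__(self, bits: int = 0):
        self.bits = bits

    def intersect_with(self, other: "BitVector") -> bool:
        """Bitwise-AND with other; bits beyond other's length behave as
        zeroes. Returns True if any bit of self changed."""
        new_bits = self.bits & other.bits
        changed = new_bits != self.bits
        self.bits = new_bits
        return changed

    def union_with(self, other: "BitVector") -> bool:
        """Bitwise-OR with other. Returns True if any bit was newly set."""
        new_bits = self.bits | other.bits
        changed = new_bits != self.bits
        self.bits = new_bits
        return changed
```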
A MultiDictionary that allows only adding, and preserves the order of values added to the
dictionary. Thread-safe for reading, but not for adding.
Always uses the default comparer.
Add a value to the dictionary.
Get all values associated with K, in the order they were added.
Returns empty read-only array if no values were present.
Get a collection of all the keys.
Each value is either a single V or an .
Never null.
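The "each value is either a single V or a list" storage trick can be sketched in Python. The class and method names here are hypothetical; the point is that the common one-value-per-key case avoids allocating a collection:

```python
class OrderPreservingMultiDictionary:
    """Add-only multi-map whose values keep insertion order. Each slot
    holds either a single value or a list of values."""
    _EMPTY = object()  # sentinel distinct from any stored value

    def __init__(self):
        self._map = {}

    def add(self, key, value):
        existing = self._map.get(key, self._EMPTY)
        if existing is self._EMPTY:
            self._map[key] = value          # single value, no list allocated
        elif isinstance(existing, list):
            existing.append(value)
        else:
            self._map[key] = [existing, value]

    def get(self, key):
        """All values for key, in the order added; empty tuple if none."""
        existing = self._map.get(key, self._EMPTY)
        if existing is self._EMPTY:
            return ()
        return tuple(existing) if isinstance(existing, list) else (existing,)
```

Note this sketch assumes stored values are not themselves lists; the real C# type distinguishes the two cases by static type instead.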
Provides methods for creating a segmented dictionary that is immutable; meaning it cannot be changed once it is
created.
Represents a segmented dictionary that is immutable; meaning it cannot be changed once it is created.
There are different scenarios best for and others
best for .
In general, is applicable in scenarios most like
the scenarios where is applicable, and
is applicable in scenarios most like the scenarios where
is applicable.
The following table summarizes the performance characteristics of
:
Operation | Complexity | Complexity | Comments
Item | O(1) | O(log n) | Directly index into the underlying segmented dictionary
Add() | O(n) | O(log n) | Requires creating a new segmented dictionary
This type is backed by segmented arrays to avoid using the Large Object Heap without impacting algorithmic
complexity.
The type of the keys in the dictionary.
The type of the values in the dictionary.
This type has a documented contract of being exactly one reference-type field in size. Our own
class depends on it, as well as others externally.
IMPORTANT NOTICE FOR MAINTAINERS AND REVIEWERS:
This type should be thread-safe. As a struct, it cannot protect its own fields from being changed from one
thread while its members are executing on other threads because structs can change in place simply by
reassigning the field containing this struct. Therefore it is extremely important that ⚠⚠ Every member
should only dereference this ONCE ⚠⚠. If a member needs to reference the
field, that counts as a dereference of this. Calling other instance members
(properties or methods) also counts as dereferencing this. Any member that needs to use this more
than once must instead assign this to a local variable and use that for the rest of the code instead.
This effectively copies the one field in the struct to a local variable so that it is insulated from other
threads.
Private helper class for use only by .
The private builder implementation.
The return value from the implementation of is
. This is the return value for most instances of this
enumerator.
The return value from the implementation of is
. This is the return value for instances of this
enumerator created by the implementation in
.
Private helper class for use only by and
.
The immutable collection this builder is based on.
The current mutable collection this builder is operating on. This field is initialized to a copy of
the first time a change is made.
Represents a segmented hash set that is immutable; meaning it cannot be changed once it is created.
There are different scenarios best for and others
best for .
The following table summarizes the performance characteristics of
:
Operation | Complexity | Complexity | Comments
Contains | O(1) | O(log n) | Directly index into the underlying segmented list
Add() | O(n) | O(log n) | Requires creating a new segmented hash set and cloning all impacted segments
This type is backed by segmented arrays to avoid using the Large Object Heap without impacting algorithmic
complexity.
The type of the value in the set.
This type has a documented contract of being exactly one reference-type field in size. Our own
class depends on it, as well as others externally.
IMPORTANT NOTICE FOR MAINTAINERS AND REVIEWERS:
This type should be thread-safe. As a struct, it cannot protect its own fields from being changed from one
thread while its members are executing on other threads because structs can change in place simply by
reassigning the field containing this struct. Therefore it is extremely important that ⚠⚠ Every member
should only dereference this ONCE ⚠⚠. If a member needs to reference the
field, that counts as a dereference of this. Calling other instance members
(properties or methods) also counts as dereferencing this. Any member that needs to use this more
than once must instead assign this to a local variable and use that for the rest of the code instead.
This effectively copies the one field in the struct to a local variable so that it is insulated from other
threads.
The private builder implementation.
Private helper class for use only by and
.
The immutable collection this builder is based on.
The current mutable collection this builder is operating on. This field is initialized to a copy of
the first time a change is made.
Represents a segmented list that is immutable; meaning it cannot be changed once it is created.
There are different scenarios best for and others
best for .
The following table summarizes the performance characteristics of
:
Operation | Complexity | Complexity | Comments
Item | O(1) | O(log n) | Directly index into the underlying segmented list
Add() | Currently O(n), but could be O(1) with a relatively large constant | O(log n) | Currently requires creating a new segmented list, but could be modified to only clone the segments with changes
Insert() | O(n) | O(log n) | Requires creating a new segmented list and cloning all impacted segments
This type is backed by segmented arrays to avoid using the Large Object Heap without impacting algorithmic
complexity.
The type of the value in the list.
This type has a documented contract of being exactly one reference-type field in size. Our own
class depends on it, as well as others externally.
IMPORTANT NOTICE FOR MAINTAINERS AND REVIEWERS:
This type should be thread-safe. As a struct, it cannot protect its own fields from being changed from one
thread while its members are executing on other threads because structs can change in place simply by
reassigning the field containing this struct. Therefore it is extremely important that ⚠⚠ Every member
should only dereference this ONCE ⚠⚠. If a member needs to reference the
field, that counts as a dereference of this. Calling other instance members
(properties or methods) also counts as dereferencing this. Any member that needs to use this more
than once must instead assign this to a local variable and use that for the rest of the code instead.
This effectively copies the one field in the struct to a local variable so that it is insulated from other
threads.
The private builder implementation.
Private helper class for use only by and
.
The immutable collection this builder is based on.
The current mutable collection this builder is operating on. This field is initialized to a copy of
the first time a change is made.
Swaps the values in the two references if the first is greater than the second.
Swaps the values in the two references, regardless of whether the two references are the same.
Helper methods for use in array/span sorting routines.
Returns the integer (floor) log of the specified value, base 2.
Note that by convention, input value 0 returns 0 since Log(0) is undefined.
Does not directly use any hardware intrinsics, nor does it incur branching.
The value.
How many ints must be allocated to represent n bits. Returns (n+31)/32, but avoids overflow.
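Both helpers above are easy to state concretely. This Python sketch illustrates the contracts only; the real C# version of the log is branchless via bit tricks, whereas Python's built-in bit_length stands in here, and the word count is written to avoid the overflow that the naive (n + 31) / 32 form hits near the maximum 32-bit value:

```python
def floor_log2(value: int) -> int:
    """Floor of log2(value); by convention, input 0 returns 0."""
    return value.bit_length() - 1 if value > 0 else 0

def words_for_bits(n: int) -> int:
    """Number of 32-bit ints needed to hold n bits. Equivalent to
    (n + 31) // 32 but computed without the n + 31 intermediate,
    so it cannot overflow a 32-bit counter."""
    return (n >> 5) + (1 if (n & 31) != 0 else 0)
```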
Returns approximate reciprocal of the divisor: ceil(2**64 / divisor).
This should only be used on 64-bit.
Performs a mod operation using the multiplier pre-computed with .
PERF: This improves performance in 64-bit scenarios at the expense of performance in 32-bit scenarios. Since
we only build a single AnyCPU binary, we opt for improved performance in the 64-bit scenario.
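The precomputed-reciprocal mod described above (Lemire's fastmod, as used by .NET's hash tables for 32-bit operands) can be sketched in Python, where the 64-bit wraparound is simulated with a mask:

```python
MASK64 = (1 << 64) - 1

def get_fast_mod_multiplier(divisor: int) -> int:
    """Approximate reciprocal of the divisor: ceil(2**64 / divisor),
    truncated to 64 bits. divisor is a 32-bit unsigned value > 0."""
    return ((MASK64 // divisor) + 1) & MASK64

def fast_mod(value: int, divisor: int, multiplier: int) -> int:
    """Compute value % divisor using the precomputed multiplier.
    Replaces a hardware divide with two multiplies and a shift."""
    low_bits = (multiplier * value) & MASK64   # low 64 bits of M * n
    return (low_bits * divisor) >> 64          # high 64 bits of the product
```

This trades a division for multiplications, which is the 64-bit performance win the note above refers to.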
Provides static methods to invoke members on value types that explicitly implement the
member.
Normally, invocation of explicit interface members requires boxing or copying the value type, which is
especially problematic for operations that mutate the value. Invocation through these helpers behaves like a
normal call to an implicitly implemented member.
Provides static methods to invoke members on value types that explicitly implement
the member.
Normally, invocation of explicit interface members requires boxing or copying the value type, which is
especially problematic for operations that mutate the value. Invocation through these helpers behaves like a
normal call to an implicitly implemented member.
Provides static methods to invoke members on value types that explicitly implement the
member.
Normally, invocation of explicit interface members requires boxing or copying the value type, which is
especially problematic for operations that mutate the value. Invocation through these helpers behaves like a
normal call to an implicitly implemented member.
Provides static methods to invoke members on value types that explicitly implement the
member.
Normally, invocation of explicit interface members requires boxing or copying the value type, which is
especially problematic for operations that mutate the value. Invocation through these helpers behaves like a
normal call to an implicitly implemented member.
Provides static methods to invoke members on value types that explicitly implement
the member.
Normally, invocation of explicit interface members requires boxing or copying the value type, which is
especially problematic for operations that mutate the value. Invocation through these helpers behaves like a
normal call to an implicitly implemented member.
Provides static methods to invoke members on value types that explicitly implement the
member.
Normally, invocation of explicit interface members requires boxing or copying the value type, which is
especially problematic for operations that mutate the value. Invocation through these helpers behaves like a
normal call to an implicitly implemented member.
Used internally to control behavior of insertion into a or .
The default insertion behavior.
Specifies that an existing entry with the same key should be overwritten if encountered.
Specifies that if an existing entry with the same key is encountered, an exception should be thrown.
Returns a by-ref to type that is a null reference.
Returns if a given by-ref to type is a null reference.
This check is conceptually similar to (void*)(&source) == nullptr.
Calculates the maximum number of elements of size which can fit into an array
which has the following characteristics:
- The array can be allocated in the small object heap.
- The array length is a power of 2.
The size of the elements in the array.
The segment size to use for small object heap segmented arrays.
Calculates a shift which can be applied to an absolute index to get the page index within a segmented array.
The number of elements in each page of the segmented array. Must be a power of 2.
The shift to apply to the absolute index to get the page index within a segmented array.
Calculates a mask, which can be applied to an absolute index to get the index within a page of a segmented
array.
The number of elements in each page of the segmented array. Must be a power of 2.
The bit mask to obtain the index within a page from an absolute index within a segmented array.
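The shift and mask computed above let a segmented array split an absolute index into a page index and an offset without division. A minimal sketch (the SEGMENT_SIZE value and function names are illustrative, not the real constants):

```python
SEGMENT_SIZE = 1024  # elements per page; assumed to be a power of 2

def calculate_segment_shift(segment_size: int) -> int:
    """Shift that turns an absolute index into a page index."""
    return segment_size.bit_length() - 1

def calculate_offset_mask(segment_size: int) -> int:
    """Mask that turns an absolute index into an index within a page."""
    return segment_size - 1

shift = calculate_segment_shift(SEGMENT_SIZE)
mask = calculate_offset_mask(SEGMENT_SIZE)

def locate(index: int):
    """Split an absolute index into (page, offset_within_page)."""
    return index >> shift, index & mask
```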
Equality comparer for hashsets of hashsets
Destination array is not long enough to copy all the items in the collection. Check array index and length.
Hashtable's capacity overflowed and went negative. Check load factor, capacity and the current size of the table.
The given key '{0}' was not present in the dictionary.
Destination array was not long enough. Check the destination index, length, and the array's lower bounds.
Source array was not long enough. Check the source index, length, and the array's lower bounds.
The lower bound of target array must be zero.
Only single dimensional arrays are supported for the requested action.
The value "{0}" is not of type "{1}" and cannot be used in this generic collection.
An item with the same key has already been added. Key: {0}
Target array type is not compatible with the type of items in the collection.
Offset and length were out of bounds for the array or count is greater than the number of elements from index to the end of the source collection.
Number was less than the array's lower bound in the first dimension.
Larger than collection size.
Count must be positive and count must refer to a location within the string/array/collection.
Index was out of range. Must be non-negative and less than the size of the collection.
Index must be within the bounds of the List.
Non-negative number required.
capacity was less than the current size.
Operations that change non-concurrent collections must have exclusive access. A concurrent update was performed on this collection and corrupted its state. The collection's state is no longer correct.
Collection was modified; enumeration operation may not execute.
Enumeration has either not started or has already finished.
Failed to compare two elements in the array.
Mutating a key collection derived from a dictionary is not allowed.
Mutating a value collection derived from a dictionary is not allowed.
The specified arrays must have the same number of dimensions.
Collection was of a fixed size.
Object is not an array with the same number of elements as the array to compare it to.
Unable to sort because the IComparer.Compare() method returns inconsistent results. Either a value does not compare equal to itself, or one value repeatedly compared to another value yields different results. IComparer: '{0}'.
Cannot find the old value
Index was out of range. Must be non-negative and less than or equal to the size of the collection.
Mutates a value in-place with optimistic locking transaction semantics via a specified transformation
function. The transformation is retried as many times as necessary to win the optimistic locking race.
The type of value stored by the list.
The variable or field to be changed, which may be accessed by multiple threads.
A function that mutates the value. This function should be side-effect free,
as it may run multiple times when races occur with other threads.
if the location's value is changed by applying the result of the
function; otherwise, if the location's value remained
the same because the last invocation of returned the existing value.
Mutates a value in-place with optimistic locking transaction semantics via a specified transformation
function. The transformation is retried as many times as necessary to win the optimistic locking race.
The type of value stored by the list.
The type of argument passed to the .
The variable or field to be changed, which may be accessed by multiple threads.
A function that mutates the value. This function should be side-effect free, as it may run multiple times
when races occur with other threads.
The argument to pass to .
if the location's value is changed by applying the result of the
function; otherwise, if the location's value remained
the same because the last invocation of returned the existing value.
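The optimistic-locking retry loop described above can be sketched in Python. AtomicCell and its lock are hypothetical stand-ins for the hardware compare-exchange the real helpers use, and reference identity plays the role the C# version gives to comparing immutable-collection references:

```python
import threading

class AtomicCell:
    """Toy compare-and-swap cell; a lock stands in for hardware CAS."""
    def __init__(self, value):
        self._value = value
        self._lock = threading.Lock()

    def read(self):
        return self._value

    def compare_exchange(self, new_value, comparand):
        """Atomically store new_value if the current value is comparand;
        return the value observed before the attempt."""
        with self._lock:
            observed = self._value
            if observed is comparand:
                self._value = new_value
            return observed

def update(cell: AtomicCell, transformer) -> bool:
    """Retry the side-effect-free transformer until it wins the race.
    Returns True if the stored value changed, False if the transformer
    returned the existing value."""
    while True:
        old_value = cell.read()
        new_value = transformer(old_value)
        if new_value is old_value:
            return False            # nothing to change
        if cell.compare_exchange(new_value, old_value) is old_value:
            return True             # our write won the race
        # Another thread changed the value first; re-read and retry.

cell = AtomicCell((1, 2))
changed = update(cell, lambda t: t + (3,))
```

Because the transformer may run several times under contention, it must be side-effect free, exactly as the documentation above requires.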
Assigns a field or variable containing an immutable list to the specified value and returns the previous
value.
The type of value stored by the list.
The field or local variable to change.
The new value to assign.
The prior value at the specified .
Assigns a field or variable containing an immutable list to the specified value if it is currently equal to
another specified value. Returns the previous value.
The type of value stored by the list.
The field or local variable to change.
The new value to assign.
The value to check equality for before assigning.
The prior value at the specified .
Assigns a field or variable containing an immutable list to the specified value if it has not yet been
initialized.
The type of value stored by the list.
The field or local variable to change.
The new value to assign.
if the field was assigned the specified value; otherwise,
if it was previously initialized.
Mutates a value in-place with optimistic locking transaction semantics via a specified transformation
function. The transformation is retried as many times as necessary to win the optimistic locking race.
The type of value stored by the set.
The variable or field to be changed, which may be accessed by multiple threads.
A function that mutates the value. This function should be side-effect free,
as it may run multiple times when races occur with other threads.
if the location's value is changed by applying the result of the
function; otherwise, if the location's value remained
the same because the last invocation of returned the existing value.
Mutates a value in-place with optimistic locking transaction semantics via a specified transformation
function. The transformation is retried as many times as necessary to win the optimistic locking race.
The type of value stored by the set.
The type of argument passed to the .
The variable or field to be changed, which may be accessed by multiple threads.
A function that mutates the value. This function should be side-effect free, as it may run multiple times
when races occur with other threads.
The argument to pass to .
if the location's value is changed by applying the result of the
function; otherwise, if the location's value remained
the same because the last invocation of returned the existing value.
Assigns a field or variable containing an immutable set to the specified value and returns the
previous value.
The type of value stored by the set.
The field or local variable to change.
The new value to assign.
The prior value at the specified .
Assigns a field or variable containing an immutable set to the specified value if it is currently
equal to another specified value. Returns the previous value.
The type of value stored by the set.
The field or local variable to change.
The new value to assign.
The value to check equality for before assigning.
The prior value at the specified .
Assigns a field or variable containing an immutable set to the specified value if it has not yet
been initialized.
The type of value stored by the set.
The field or local variable to change.
The new value to assign.
if the field was assigned the specified value; otherwise,
if it was previously initialized.
Mutates a value in-place with optimistic locking transaction semantics via a specified transformation
function. The transformation is retried as many times as necessary to win the optimistic locking race.
The type of key stored by the dictionary.
The type of value stored by the dictionary.
The variable or field to be changed, which may be accessed by multiple threads.
A function that mutates the value. This function should be side-effect free,
as it may run multiple times when races occur with other threads.
if the location's value is changed by applying the result of the
function; otherwise, if the location's value remained
the same because the last invocation of returned the existing value.
Mutates a value in-place with optimistic locking transaction semantics via a specified transformation
function. The transformation is retried as many times as necessary to win the optimistic locking race.
The type of key stored by the dictionary.
The type of value stored by the dictionary.
The type of argument passed to the .
The variable or field to be changed, which may be accessed by multiple threads.
A function that mutates the value. This function should be side-effect free, as it may run multiple times
when races occur with other threads.
The argument to pass to .
if the location's value is changed by applying the result of the
function; otherwise, if the location's value remained
the same because the last invocation of returned the existing value.
Assigns a field or variable containing an immutable dictionary to the specified value and returns the
previous value.
The type of key stored by the dictionary.
The type of value stored by the dictionary.
The field or local variable to change.
The new value to assign.
The prior value at the specified .
Assigns a field or variable containing an immutable dictionary to the specified value if it is currently
equal to another specified value. Returns the previous value.
The type of key stored by the dictionary.
The type of value stored by the dictionary.
The field or local variable to change.
The new value to assign.
The value to check equality for before assigning.
The prior value at the specified .
Assigns a field or variable containing an immutable dictionary to the specified value if it has not yet
been initialized.
The type of key stored by the dictionary.
The type of value stored by the dictionary.
The field or local variable to change.
The new value to assign.
if the field was assigned the specified value; otherwise,
if it was previously initialized.
Reads from an ImmutableArray location, ensuring that a read barrier is inserted to prevent any subsequent reads from being reordered before this read.
This method is not intended to be used to provide write barriers.
Writes to an ImmutableArray location, ensuring that a write barrier is inserted to prevent any prior writes from being reordered after this write.
This method is not intended to be used to provide read barriers.
Defines a fixed-size collection with the same API surface and behavior as an "SZArray", which is a
single-dimensional zero-based array commonly represented in C# as T[]. The implementation of this
collection uses segmented arrays to avoid placing objects on the Large Object Heap.
The type of elements stored in the array.
Private helper class for use only by .
The number of elements in each page of the segmented array of type .
The segment size is calculated according to , which performs the IL operation
defined by . ECMA-335 defines this operation with the following note:
sizeof returns the total size that would be occupied by each element in an array of this type –
including any padding the implementation chooses to add. Specifically, array elements lie sizeof
bytes apart.
The bit shift to apply to an array index to get the page index within .
The bit mask to apply to an array index to get the index within a page of .
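The shift-and-mask lookup these two constants enable can be sketched as follows; the page size of 1024 elements is an assumed value for illustration:

```python
SEGMENT_SHIFT = 10            # assumed: 1024 elements per page
PAGE_SIZE = 1 << SEGMENT_SHIFT
OFFSET_MASK = PAGE_SIZE - 1

def locate(index):
    """Split a flat array index into (page index, offset within the page)."""
    return index >> SEGMENT_SHIFT, index & OFFSET_MASK

assert locate(0) == (0, 0)
assert locate(1024) == (1, 0)
assert locate(2500) == (2, 452)
```

Because the page size is a power of two, the division and modulo reduce to a single shift and a single mask, which is the point of storing the shift and mask rather than the page size itself.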
An unsafe class that provides a set of methods to access the underlying data representations of immutable segmented
collections.
Gets the backing storage array for a .
The type of elements stored in the array.
The segmented array.
The backing storage array for the segmented array. Note that replacing segments within the returned
value will invalidate the data structure.
Gets a value wrapping the input T[][].
The type of elements in the input.
The combined length of the input arrays
The input array to wrap in the returned value.
A value wrapping .
When using this method, callers should take extra care to ensure that they're the sole owners of the input
array, and that it won't be modified once the returned value starts
being used. Doing so might cause undefined behavior in code paths which don't expect the contents of a given
values to change outside their control.
Thrown when is
Gets either a ref to a in the or a
ref null if it does not exist in the .
The dictionary to get the ref to from.
The key used for lookup.
The type of the keys in the dictionary.
The type of the values in the dictionary.
Items should not be added or removed from the while the ref
is in use. The ref null can be detected using .
Gets either a read-only ref to a in the
or a ref null if it does not exist in the .
The dictionary to get the ref to from.
The key used for lookup.
The type of the keys in the dictionary.
The type of the values in the dictionary.
The ref null can be detected using .
Gets either a ref to a in the
or a ref null if it does not exist in the .
The dictionary to get the ref to from.
The key used for lookup.
The type of the keys in the dictionary.
The type of the values in the dictionary.
Items should not be added or removed from the
while the ref is in use. The ref null can be detected using
.
Gets an value wrapping the input .
The type of elements in the input segmented list.
The input segmented list to wrap in the returned value.
An value wrapping .
When using this method, callers should take extra care to ensure that they're the sole owners of the input
list, and that it won't be modified once the returned value starts
being used. Doing so might cause undefined behavior in code paths which don't expect the contents of a given
values to change after its creation.
If is , the returned value
will be uninitialized (i.e. its property will be
).
Gets the underlying for an input value.
The type of elements in the input value.
The input value to get the underlying from.
The underlying for , if present; otherwise, .
When using this method, callers should make sure to not pass the resulting underlying list to methods that
might mutate it. Doing so might cause undefined behavior in code paths using which
don't expect the contents of the value to change.
If is uninitialized (i.e. its property is
), the resulting will be .
Gets an value wrapping the input .
The type of elements in the input segmented hash set.
The input segmented hash set to wrap in the returned value.
An value wrapping .
When using this method, callers should take extra care to ensure that they're the sole owners of the input
set, and that it won't be modified once the returned value starts
being used. Doing so might cause undefined behavior in code paths which don't expect the contents of a given
values to change after its creation.
If is , the returned
value will be uninitialized (i.e. its property will be
).
Gets the underlying for an input value.
The type of elements in the input value.
The input value to get the underlying from.
The underlying for , if present; otherwise, .
When using this method, callers should make sure to not pass the resulting underlying hash set to methods that
might mutate it. Doing so might cause undefined behavior in code paths using which
don't expect the contents of the value to change.
If is uninitialized (i.e. its
property is ), the resulting will be .
Gets an value wrapping the input .
The type of keys in the input segmented dictionary.
The type of values in the input segmented dictionary.
The input segmented dictionary to wrap in the returned value.
An value wrapping .
When using this method, callers should take extra care to ensure that they're the sole owners of the input
dictionary, and that it won't be modified once the returned
value starts being used. Doing so might cause undefined behavior in code paths which don't expect the contents
of a given values to change after its creation.
If is , the returned
value will be uninitialized (i.e. its
property will be ).
Gets the underlying for an input value.
The type of keys in the input value.
The type of values in the input value.
The input value to get the underlying from.
The underlying for , if present; otherwise, .
When using this method, callers should make sure to not pass the resulting underlying dictionary to methods that
might mutate it. Doing so might cause undefined behavior in code paths using which
don't expect the contents of the value to change.
If is uninitialized (i.e. its
property is ), the resulting will be .
Represents a collection of keys and values.
This collection has the same performance characteristics as , but
uses segmented arrays to avoid allocations in the Large Object Heap.
The type of the keys in the dictionary.
The type of the values in the dictionary.
Private helper class for use only by .
doesn't devirtualize on .NET Framework, so we always ensure
is initialized to a non- value.
Ensures that the dictionary can hold up to 'capacity' entries without any further expansion of its backing storage
Sets the capacity of this dictionary to what it would be if it had been originally initialized with all its entries
This method can be used to minimize the memory overhead
once it is known that no new elements will be added.
To allocate minimum size storage array, execute the following statements:
dictionary.Clear();
dictionary.TrimExcess();
Sets the capacity of this dictionary to hold up to 'capacity' entries without any further expansion of its backing storage
This method can be used to minimize the memory overhead
once it is known that no new elements will be added.
0-based index of next entry in chain: -1 means end of chain
also encodes whether this entry _itself_ is part of the free list by changing sign and subtracting 3,
so -2 means end of free list, -3 means index 0 but on free list, -4 means index 1 but on free list, etc.
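The sign-flip-and-subtract-3 encoding described above is its own inverse, which a small sketch makes concrete:

```python
END_OF_FREE_LIST = -1  # sentinel before encoding

def encode_free_next(next_index):
    """Fold a free-list 'next' pointer into the same field as the chain
    pointer: encode(-1) == -2 (end of free list), encode(0) == -3,
    encode(1) == -4, and so on."""
    return -3 - next_index

def decode_free_next(stored):
    """Inverse of encode; the transform x -> -3 - x is an involution."""
    return -3 - stored

assert encode_free_next(END_OF_FREE_LIST) == -2
assert encode_free_next(0) == -3
assert decode_free_next(encode_free_next(7)) == 7
```

This lets a single int field serve double duty: non-negative values are in-use chain links, and values at or below -2 identify entries on the free list.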
Cutoff point for stackallocs. This corresponds to the number of ints.
When constructing a hashset from an existing collection, it may contain duplicates,
so this is used as the max acceptable excess ratio of capacity to count. Note that
this is only used on the ctor and not to automatically shrink if the hashset has, e.g.,
a lot of adds followed by removes. Users must explicitly shrink by calling TrimExcess.
This is set to 3 because capacity is acceptable as 2x rounded up to nearest prime.
doesn't devirtualize on .NET Framework, so we always ensure
is initialized to a non- value.
Initializes the SegmentedHashSet from another SegmentedHashSet with the same element type and equality comparer.
Removes all elements from the object.
Determines whether the contains the specified element.
The element to locate in the object.
true if the object contains the specified element; otherwise, false.
Gets the index of the item in , or -1 if it's not in the set.
Gets a reference to the specified hashcode's bucket, containing an index into .
Gets the number of elements that are contained in the set.
Adds the specified element to the .
The element to add to the set.
true if the element is added to the object; false if the element is already present.
Searches the set for a given value and returns the equal value it finds, if any.
The value to search for.
The value from the set that the search found, or the default value of when the search yielded no match.
A value indicating whether the search was successful.
This can be useful when you want to reuse a previously stored reference instead of
a newly constructed one (so that more sharing of references can occur) or to look up
a value that has more complete data than the value you currently have, although their
comparer functions indicate they are equal.
Modifies the current object to contain all elements that are present in itself, the specified collection, or both.
The collection to compare to the current object.
Modifies the current object to contain only elements that are present in that object and in the specified collection.
The collection to compare to the current object.
Removes all elements in the specified collection from the current object.
The collection to compare to the current object.
Modifies the current object to contain only elements that are present either in that object or in the specified collection, but not both.
The collection to compare to the current object.
Determines whether a object is a subset of the specified collection.
The collection to compare to the current object.
true if the object is a subset of ; otherwise, false.
Determines whether a object is a proper subset of the specified collection.
The collection to compare to the current object.
true if the object is a proper subset of ; otherwise, false.
Determines whether a object is a superset of the specified collection.
The collection to compare to the current object.
true if the object is a superset of ; otherwise, false.
Determines whether a object is a proper superset of the specified collection.
The collection to compare to the current object.
true if the object is a proper superset of ; otherwise, false.
Determines whether the current object and a specified collection share common elements.
The collection to compare to the current object.
true if the object and share at least one common element; otherwise, false.
Determines whether a object and the specified collection contain the same elements.
The collection to compare to the current object.
true if the object is equal to ; otherwise, false.
Copies the elements of a object to an array, starting at the specified array index.
The destination array.
The zero-based index in array at which copying begins.
Removes all elements that match the conditions defined by the specified predicate from a collection.
Gets the object that is used to determine equality for the values in the set.
Ensures that this hash set can hold the specified number of elements without growing.
Sets the capacity of a object to the actual number of elements it contains,
rounded up to a nearby, implementation-specific value.
Returns an object that can be used for equality testing of a object.
Initializes buckets and slots arrays. Uses suggested capacity by finding next prime
greater than or equal to capacity.
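A naive sketch of the "next prime greater than or equal to capacity" step; real implementations typically consult a precomputed prime table, which this does not attempt to reproduce:

```python
def next_prime_at_least(n):
    """Smallest prime >= n (trial-division sketch of bucket sizing)."""
    def is_prime(k):
        if k < 2:
            return False
        i = 2
        while i * i <= k:
            if k % i == 0:
                return False
            i += 1
        return True
    while not is_prime(n):
        n += 1
    return n

assert next_prime_at_least(10) == 11
assert next_prime_at_least(17) == 17
```

Prime bucket counts keep hash codes well distributed across buckets even when the inputs share common factors.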
Adds the specified element to the set if it's not already contained.
The element to add to the set.
The index into of the element.
true if the element is added to the object; false if the element is already present.
Implementation Notes:
If other is a hashset and is using same equality comparer, then checking subset is
faster. Simply check that each element in this is in other.
Note: if other doesn't use the same equality comparer, then the Contains check is invalid,
which is why callers must take care of this.
If callers are concerned about whether this is a proper subset, they take care of that.
If other is a hashset that uses same equality comparer, intersect is much faster
because we can use other's Contains
Iterate over other. If contained in this, mark an element in bit array corresponding to
its position in _slots. If anything is unmarked (in bit array), remove it.
This attempts to allocate on the stack, if below StackAllocThreshold.
If other is a set, we can assume it doesn't have duplicate elements, so use this
technique: if an element can't be removed, then it wasn't present in this set, so add it.
As with other methods, callers take care of ensuring that other is a hashset using the
same equality comparer.
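The remove-else-add technique can be sketched with Python sets standing in for hash sets that share an equality comparer:

```python
def symmetric_except_with(this_set, other_set):
    """Symmetric difference in place. Because `other` is a set (no
    duplicates), each element is handled once: if removal fails it wasn't
    present in `this`, so add it."""
    for item in other_set:
        if item in this_set:
            this_set.remove(item)
        else:
            this_set.add(item)

s = {1, 2, 3}
symmetric_except_with(s, {2, 3, 4})
assert s == {1, 4}
```

The no-duplicates assumption is essential: with a duplicate in `other`, the same element would be toggled twice and end up back where it started.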
Implementation notes:
Used for symmetric except when other isn't a SegmentedHashSet. This is more tedious because
other may contain duplicates. SegmentedHashSet technique could fail in these situations:
1. Other has a duplicate that's not in this: SegmentedHashSet technique would add then
remove it.
2. Other has a duplicate that's in this: SegmentedHashSet technique would remove then add it
back.
In general, its presence would be toggled each time it appears in other.
This technique uses bit marking to indicate whether to add/remove the item. If already
present in collection, it will get marked for deletion. If added from other, it will
get marked as something not to remove.
Determines counts that can be used to determine equality, subset, and superset. This
is only used when other is an IEnumerable and not a SegmentedHashSet. If other is a SegmentedHashSet
these properties can be checked faster without use of marking because we can assume
other has no duplicates.
The following count checks are performed by callers:
1. Equals: checks if unfoundCount = 0 and uniqueFoundCount = _count; i.e. everything
in other is in this and everything in this is in other
2. Subset: checks if unfoundCount >= 0 and uniqueFoundCount = _count; i.e. other may
have elements not in this and everything in this is in other
3. Proper subset: checks if unfoundCount > 0 and uniqueFoundCount = _count; i.e.
other must have at least one element not in this and everything in this is in other
4. Proper superset: checks if unfound count = 0 and uniqueFoundCount strictly less
than _count; i.e. everything in other was in this and this had at least one element
not contained in other.
An earlier implementation used delegates to perform these checks rather than returning
an ElementCount struct; however this was changed due to the perf overhead of delegates.
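The unique-found/unfound counting can be illustrated with a Python set standing in for the bit array used for marking:

```python
def check_against(this_set, other_iterable):
    """Return (unique_found, unfound): how many distinct elements of
    `other` are in `this`, and how many elements of `other` are not.
    Duplicates in `other` must not be double-counted, hence the marking
    set rather than a plain counter."""
    marked = set()
    unfound = 0
    for item in other_iterable:
        if item in this_set:
            marked.add(item)
        else:
            unfound += 1
    return len(marked), unfound

unique_found, unfound = check_against({1, 2, 3}, [2, 2, 3, 5])
assert (unique_found, unfound) == (2, 1)
# e.g. set-equal when unfound == 0 and unique_found == len(this_set)
```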
Allows us to finish faster for equals and proper superset
because unfoundCount must be 0.
Checks if equality comparers are equal. This is used for algorithms that can
speed up if it knows the other item has unique elements. I.e. if they're using
different equality comparers, then uniqueness assumption between sets break.
0-based index of next entry in chain: -1 means end of chain
also encodes whether this entry _itself_ is part of the free list by changing sign and subtracting 3,
so -2 means end of free list, -3 means index 0 but on free list, -4 means index 1 but on free list, etc.
Represents a strongly typed list of objects that can be accessed by index. Provides methods to search, sort, and
manipulate lists.
This collection has the same performance characteristics as , but uses segmented
arrays to avoid allocations in the Large Object Heap.
The type of elements in the list.
Ensures that the capacity of this list is at least the specified .
If the current capacity of the list is less than the specified ,
the capacity is increased by repeatedly doubling the current capacity until it is at least the specified .
The minimum capacity to ensure.
The new capacity of this list.
Increase the capacity of this list to at least the specified .
The minimum capacity to ensure.
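A sketch of the doubling growth policy; the seed capacity of 4 is an assumption for illustration, not the documented default:

```python
def grow_capacity(current, minimum):
    """Double the capacity until it is at least `minimum`."""
    capacity = max(current, 4)  # assumed seed for an empty list
    while capacity < minimum:
        capacity *= 2
    return capacity

assert grow_capacity(4, 100) == 128
```

Doubling keeps the amortized cost of appends constant: each element is copied O(1) times on average across all grow operations.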
Creates a shallow copy of a range of elements in the source .
The zero-based index at which the range starts.
The length of the range.
A shallow copy of a range of elements in the source .
is less than 0.
-or-
is less than 0.
and do not denote a valid range of elements in the .
Checks if a type is considered a "built-in integral" by CLR.
Checks if a type is a primitive of a fixed size.
These special types are structs that contain fields of the same type
(e.g. System.Int32 contains a field of type System.Int32).
Checks if a type is considered a "built-in integral" by CLR.
For signed integer types, returns the number of bits in their representation minus 1,
i.e. 7 for Int8, 31 for Int32, etc.
Used for checking the loop end condition of a VB For loop.
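The "bits minus one" rule can be sketched as a lookup; the type-name table here is illustrative, not the real API:

```python
# Hypothetical table: width in bits of each signed integral type.
SIGNED_BIT_WIDTHS = {"Int8": 8, "Int16": 16, "Int32": 32, "Int64": 64}

def value_bits(type_name):
    """Magnitude bits of a signed type: total width minus the sign bit."""
    return SIGNED_BIT_WIDTHS[type_name] - 1

assert value_bits("Int8") == 7
assert value_bits("Int32") == 31
```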
Tells whether a different code path can be taken based on the fact that a given type is a special type.
This method is called in places where conditions like specialType != SpecialType.None were previously used.
The main reason this method exists is to prevent such conditions, which introduce silent code changes every time a new special type is added.
This doesn't mean the special type range checked by this method cannot be modified,
but rather that each usage of this method needs to be reviewed to make sure everything works as expected in such cases.
Convert a boxed primitive (generally of the backing type of an enum) into a ulong.
Maps an array builder to an immutable array.
The array to map
The mapping delegate
If the items' length is 0, this will return an empty immutable array.
Maps an array builder to an immutable array.
The sequence to map
The mapping delegate
The extra input used by the mapping delegate
If the items' length is 0, this will return an empty immutable array.
Maps an array builder to an immutable array.
The sequence to map
The mapping delegate
The extra input used by the mapping delegate
If the items' length is 0, this will return an empty immutable array.
The collection of extension methods for the type
If the given key is not found in the dictionary, add it with the given value and return the value.
Otherwise return the existing value associated with that key.
If the given key is not found in the dictionary, add it with the result of invoking getValue and return the value.
Otherwise return the existing value associated with that key.
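The GetOrAdd semantics can be sketched over a plain Python dict; note the factory is only invoked on a miss:

```python
def get_or_add(d, key, get_value):
    """Return the existing value for `key`, or invoke `get_value`,
    store its result under `key`, and return it."""
    if key in d:
        return d[key]
    value = get_value()
    d[key] = value
    return value

d = {"a": 1}
assert get_or_add(d, "a", lambda: 99) == 1   # existing value wins
assert get_or_add(d, "b", lambda: 2) == 2    # factory runs, result stored
assert d == {"a": 1, "b": 2}
```

Deferring the value construction behind a delegate matters when building the value is expensive: the eager overload pays that cost even on a hit.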
Converts the passed in dictionary to an , where all
the values in the passed builder will be converted to an using . The will be freed at the end of
this method as well, and should not be used afterwards.
Initializes a new instance of the class.
An ordered set of fully qualified
paths which are searched when resolving assembly names.
Directory used when resolving relative paths.
A pre-created delegate to assign to if needed.
Dumps the stack trace of the exception and the handler to the console. This is useful
for debugging unit tests that hit a fatal exception
Checks for the given ; if the is true,
immediately terminates the process without running any pending finally blocks or finalizers
and causes a crash dump to be collected (if the system is configured to do so).
Otherwise, the process continues normally.
The conditional expression to evaluate.
An optional message to be recorded in the dump in case of failure. Can be null.
Ensures that the remaining stack space is large enough to execute
the average function.
how many times the calling function has recursed
The available stack space is insufficient to execute
the average function.
Where to place C# usings relative to namespace declaration, ignored by VB.
Specifies the desired placement of added imports.
Place imports inside the namespace definition.
Place imports outside the namespace definition.
Place imports outside the namespace definition, ignoring import aliases (which can stay inside the namespace).
Returns true if the tree already has an existing import syntactically equivalent to
in scope at . This includes
global imports for VB.
Given a context location in a provided syntax tree, returns the appropriate container
that should be added to.
Document-specific options for controlling the code produced by code generation.
Language agnostic defaults.
The name the underlying naming system came up with based on the argument itself.
This might be a name like "_value". We pass this along because it can help
later parts of the GenerateConstructor process when doing things like field hookup.
The name we think should actually be used for this parameter. This will include
stripping the name of things like underscores.
Returns the declaration most relevant to namespaceOrType.
It first searches the context node contained within,
then the declaration in the same file, then non-auto-generated files,
then all the potential locations. Returns null if there is no declaration.
General options for controlling the code produced by the that apply to all documents.
A location used to determine the best place to generate a member. This is only used for
determining which part of a partial type to generate in. If a type only has one part, or
an API is used that specifies the type, then this is not used. A part is preferred if
it surrounds this context location. If no part surrounds this location then a part is
preferred if it comes from the same SyntaxTree as this location. If there is no
such part, then any part may be used for generation.
This option is not necessary if or are
provided.
A hint to the code generation service to specify where the generated code should be
placed. Code will be generated after this location if the location is valid in the type
or symbol being generated into, and it is possible to generate the code after it.
If this option is provided, neither nor are
needed.
A hint to the code generation service to specify where the generated code should be
placed. Code will be generated before this location if the location is valid in the type
or symbol being generated into, and it is possible to generate the code before it.
If this option is provided, neither nor are
needed.
True if the code generation service should add ,
and when not generating directly into a declaration, should try to automatically add imports to the file
for any generated code.
Defaults to true.
Contains additional imports to be automatically added. This is useful for adding
imports that are part of a list of statements.
True if members of a symbol should also be generated along with the declaration. If
false, only the symbol's declaration will be generated.
True if the code generator should merge namespaces which only contain other namespaces
into a single declaration with a dotted name. False if the nesting should be preserved
and each namespace declaration should be nested and should only have a single non-dotted
name.
Merging can only occur if the namespace only contains a single member that is also a
namespace.
True if the code generation should put multiple attributes in a single attribute
declaration, or if it should have a separate attribute declaration for each attribute. For
example, in C#, setting this to True would produce "[Goo, Bar]" while setting it to
False would produce "[Goo][Bar]".
True if the code generator should always generate accessibility modifiers, even if they
are the same as the defaults for that symbol. For example, a private field in C# does
not need its accessibility specified as it will be private by default. However, if this
option is set to true 'private' will still be generated.
True if the code generator should generate empty bodies for methods along with the
method declaration. If false, only method declarations will be generated.
True if the code generator should generate documentation comments where available
True if the code generator should automatically attempt to choose the appropriate location
to insert members. If false and a generation location is not specified by AfterThisLocation
or BeforeThisLocation, members will be inserted at the end of the destination definition.
If is , determines if members will be
sorted before being added to the end of the list of members.
True if the code generator should attempt to reuse the syntax of the constituent entities, such as members, access modifier tokens, etc. while attempting to generate code.
If any of the member symbols have zero declaring syntax references (non-source symbols) OR two or more declaring syntax references (partial definitions), then syntax is not reused.
If false, then the code generator will always synthesize a new syntax node and ignore the declaring syntax references.
Context and preferences.
Generates symbols that describe declarations to be generated.
Determines if the symbol is purely a code generation symbol.
Creates an event symbol that can be used to describe an event declaration.
Creates a property symbol that can be used to describe a property declaration.
Creates a field symbol that can be used to describe a field declaration.
Creates a constructor symbol that can be used to describe a constructor declaration.
Creates a destructor symbol that can be used to describe a destructor declaration.
Creates a method symbol that can be used to describe a method declaration.
Creates a method symbol that can be used to describe an operator declaration.
Creates a method symbol that can be used to describe a conversion declaration.
Creates a method symbol that can be used to describe a conversion declaration.
Creates a parameter symbol that can be used to describe a parameter declaration.
Creates a parameter symbol that can be used to describe a parameter declaration.
Creates a parameter symbol that can be used to describe a parameter declaration.
Creates a parameter symbol that can be used to describe a parameter declaration.
Creates a type parameter symbol that can be used to describe a type parameter declaration.
Creates a pointer type symbol that can be used to describe a pointer type reference.
Creates an array type symbol that can be used to describe an array type reference.
Creates a method symbol that can be used to describe an accessor method declaration.
Create attribute data that can be used in describing an attribute declaration.
Creates a named type symbol that can be used to describe a named type declaration.
Creates a named type symbol that can be used to describe a named type declaration.
Creates a method type symbol that can be used to describe a delegate type declaration.
Creates a namespace symbol that can be used to describe a namespace declaration.
A generator used for creating or modifying member declarations in source.
Annotation placed on generated syntax.
Create a new solution where the declaration of the destination symbol has an additional event of the same signature as the specified event symbol.
Returns the document in the new solution where the destination symbol is declared.
Create a new solution where the declaration of the destination symbol has an additional field of the same signature as the specified field symbol.
Returns the document in the new solution where the destination symbol is declared.
Create a new solution where the declaration of the destination symbol has an additional method of the same signature as the specified method symbol.
Returns the document in the new solution where the destination symbol is declared.
Create a new solution where the declaration of the destination symbol has an additional property of the same signature as the specified property symbol.
Returns the document in the new solution where the destination symbol is declared.
Create a new solution where the declaration of the destination symbol has an additional named type of the same signature as the specified named type symbol.
Returns the document in the new solution where the destination symbol is declared.
Create a new solution where the declaration of the destination symbol has an additional named type of the same signature as the specified named type symbol.
Returns the document in the new solution where the destination symbol is declared.
Create a new solution where the declaration of the destination symbol has an additional namespace of the same signature as the specified namespace symbol.
Returns the document in the new solution where the destination symbol is declared.
Create a new solution where the declaration of the destination symbol has an additional namespace or type of the same signature as the specified namespace or type symbol.
Returns the document in the new solution where the destination symbol is declared.
Create a new solution where the declaration of the destination symbol has additional members of the same signature as the specified member symbols.
Returns the document in the new solution where the destination symbol is declared.
Returns true if additional declarations can be added to the destination symbol's declaration.
Returns a newly created event declaration node from the provided event.
Returns a newly created field declaration node from the provided field.
Returns a newly created method declaration node from the provided method.
TODO: do not return null (https://github.com/dotnet/roslyn/issues/58243)
Returns a newly created property declaration node from the provided property.
Returns a newly created named type declaration node from the provided named type.
Returns a newly created namespace declaration node from the provided namespace.
Adds an event into destination.
Adds a field into destination.
Adds a method into destination.
Adds a property into destination.
Adds a named type into destination.
Adds a namespace into destination.
Adds members into destination.
Adds the parameters to destination.
Adds the attributes to destination.
Remove the given attribute from destination.
Remove the given attribute from destination.
Update the modifiers list for the given declaration node.
Update the accessibility modifiers for the given declaration node, retaining the trivia of the existing modifiers.
Update the type for the given declaration node.
Replace the existing members with the given newMembers for the given declaration node.
Adds the statements to destination.
Adds a field with the provided signature into destination.
Adds a field with the provided signature into destination.
Adds a method with the provided signature into destination.
Adds a property with the provided signature into destination.
Adds a named type into destination.
Adds a named type into destination.
Adds a namespace into destination.
Adds a namespace or type into destination.
Adds all the provided members into destination.
true if destination is a location where other symbols can be added to.
true if destination is a location where other symbols can be added to.
Returns the declaration most relevant to namespaceOrType.
It first searches the context node contained within,
then the declaration in the same file, then non-auto-generated files,
then all the potential locations. Returns null if there is no declaration.
When we are generating literals, we sometimes want to emit code instead of the numeric literal. This class
gives the constants for all the ones we want to convert.
Annotation placed on s that the converts to a node. This
information tracks the original nullable state of the symbol and is used by metadata-as-source to determine if
it needs to add #nullable directives in the file.
For string~ types.
For string! or string? types.
Base representation of an editorconfig file that has been parsed
The kind of options that we expect to encounter in the editorconfig file.
The full path to the editorconfig file on disk. Optional if not doing pathwise comparisons
The set of options that were discovered in the file.
Attempts to find a section of the editorconfig file that is an exact match for the given language.
Attempts to find a section of the editorconfig file that applies to the given language for the given criteria.
Attempts to find a section of the editorconfig file that applies to the given file.
Attempts to find a section of the editorconfig file that applies to the given file for the given criteria.
Base option that all editorconfig option inherit from.
Base option that all editorconfig option inherit from.
An abstraction over an editorconfig option that represents some type and the span in which that option was defined.
An abstraction over an editorconfig option that represents some type and the span in which that option was defined.
Represents a completed parse of a single editorconfig document
The full file path to the file on disk. Can be null if you never need to compare if a section is valid for pathing reasons
The set of naming style options that were parsed in the file
Represents a completed parse of a single editorconfig document
The full file path to the file on disk. Can be null if you never need to compare if a section is valid for pathing reasons
The set of naming style options that were parsed in the file
The full file path to the file on disk. Can be null if you never need to compare if a section is valid for pathing reasons
The set of naming style options that were parsed in the file
Parses a string and returns all discovered naming style options and their locations
The text contents of the editorconfig file.
The full path to the editorconfig file on disk.
A type that represents all discovered naming style options in the given string.
Parses a and returns all discovered naming style options and their locations
The contents of the editorconfig file.
The full path to the editorconfig file on disk.
A type that represents all discovered naming style options in the given .
The root naming style option composed of several settings as well as a s describing where they were all defined.
The section of the editorconfig file this option applies to.
The name given to this option in the file.
The kinds of symbols this option applies to.
The rules about how the specified symbols must be named.
The kind of build error that should be produced when a matching symbol does not meet the naming requirements.
The root naming style option composed of several settings as well as a s describing where they were all defined.
The section of the editorconfig file this option applies to.
The name given to this option in the file.
The kinds of symbols this option applies to.
The rules about how the specified symbols must be named.
The kind of build error that should be produced when a matching symbol does not meet the naming requirements.
The name given to this option in the file.
The kinds of symbols this option applies to.
The rules about how the specified symbols must be named.
The kind of build error that should be produced when a matching symbol does not meet the naming requirements.
A description of the kinds of symbols a rule should apply to as well as a s describing where they were all defined.
The name given to this option in the file.
The kinds of symbols this option applies to.
The accessibilities of symbols this option applies to.
The required modifier that must be present on symbols this option applies to.
A description of the kinds of symbols a rule should apply to as well as a s describing where they were all defined.
The name given to this option in the file.
The kinds of symbols this option applies to.
The accessibilities of symbols this option applies to.
The required modifier that must be present on symbols this option applies to.
The name given to this option in the file.
The kinds of symbols this option applies to.
The accessibilities of symbols this option applies to.
The required modifier that must be present on symbols this option applies to.
The rules about how the specified symbols must be named as well as a s describing where they were all defined.
The name given to this option in the file.
Required suffix
Required prefix
Required word separator characters
The capitalization scheme
The rules about how the specified symbols must be named as well as a s describing where they were all defined.
The name given to this option in the file.
Required suffix
Required prefix
Required word separator characters
The capitalization scheme
The name given to this option in the file.
Required suffix
Required prefix
Required word separator characters
The capitalization scheme
Returns the default section header text for the given language combination
The language combination to find the default header text for.
the default header text.
Checks whether this header supports the given language for the given match criteria
The language to check support for.
The criteria for which we consider a language a match; the default is .
If this section is a match for the given language, meaning options can be added here.
Checks whether this header supports the given file path for the given match criteria
The full path to a file
The criteria for which we consider a language a match; the default is .
If the section header cannot be parsed because it is invalid, this method will always return no match.
If no file path was given in the operation that produces this section and a relative path comparison is required to check for support this method will return no match.
If this section is a match for the given file, meaning options can be added here.
Most exact section match for a language. Will always match all files for the given language.
- for C# this is [*.cs]
- for Visual Basic it is [*.vb].
- If both languages are specified it is [*.{cs,vb}]
Exact section match for a language with unknown file patterns. Will always match all files for the given language.
An exact match but with some unknown file patterns also matching
example for C#: [*.{cs,csx}]
This will not be the case if only C# was specified and a VB pattern is found
(or vice versa)
An exact section match for a language with other known language patterns. Will match all files for the given language as well as other known languages.
Given this pattern [*.{cs,vb}] for C# this is considered a match (since it matches all C# files).
Even though it also matches for Visual Basic.
Matches the file pattern according to the editorconfig specification but is a superset of an exact language match.
Patterns such as [*c*] or [*s] would match for C# in this case (being a superset of *.cs)
Matches the file pattern according to the editorconfig specification but is a subset of an exact language match.
Patterns such as [*.Tests.cs] would match for C# if the file being considered is UnitTests.cs
Matches [*].
Matched because section is global and therefore always matches.
Matches any valid pattern except for global section.
Matches any valid pattern.
Section did not match and is not applicable to the file or language.
Represents an error in an embedded language snippet. The error contains the message to show
a user as well as the span of the error. This span is in actual user character coordinates.
For example, if the user has the string "...\\p{0}..." then the span of the error would be
for the range of characters for '\\p{0}' (even though the regex engine would only see the \\
translated as a virtual char to the single \ character).
Represents an error in an embedded language snippet. The error contains the message to show
a user as well as the span of the error. This span is in actual user character coordinates.
For example, if the user has the string "...\\p{0}..." then the span of the error would be
for the range of characters for '\\p{0}' (even though the regex engine would only see the \\
translated as a virtual char to the single \ character).
Retrieves only nodes, skipping the separator tokens
Root of the embedded language syntax hierarchy. EmbeddedSyntaxNodes are very similar to
Roslyn Red-Nodes in concept, though there are differences for ease of implementation.
Similarities:
1. Fully representative of the original source. All source VirtualChars are contained
in the Regex nodes.
2. Specific types for Nodes, Tokens and Trivia.
3. Uniform ways of deconstructing Nodes (i.e. ChildCount + ChildAt).
Differences:
Note: these differences are not required, and can be changed if felt to be valuable.
1. No parent pointers. These have not been needed yet.
2. No Update methods. These have not been needed yet.
3. No direct ways to get Positions/Spans of node/token/trivia. Instead, that information can
be acquired from the VirtualChars contained within those constructs. This does mean that
an empty node (for example, an empty RegexSequenceNode) in effect has no way to simply ascertain
its location. So far that hasn't been a problem.
4. No null nodes. Haven't been needed so far, and it keeps things extremely simple. For
example where Roslyn might have chosen an optional null child, the Regex hierarchy just
has multiple nodes. For example there are distinct nodes to represent the very similar
{a} {a,} {a,b} constructs.
Returns the string representation of this node, not including its leading and trailing trivia.
The string representation of this node, not including its leading and trailing trivia.
The length of the returned string is always the same as Span.Length
Returns full string representation of this node including its leading and trailing trivia.
The full string representation of this node including its leading and trailing trivia.
The length of the returned string is always the same as FullSpan.Length
Writes the node to a stringbuilder.
If false, leading trivia will not be added
If false, trailing trivia will not be added
Returns the value of the token. For example, if the token represents an integer capture,
then this property would return the actual integer.
Writes the token to a stringbuilder.
If false, leading trivia will not be added
If false, trailing trivia will not be added
Trivia on an .
A place for diagnostics to be stored during parsing. Not intended to be accessed
directly. These will be collected and aggregated into
Returns if the next two characters at tokenText[index] are {{ or
}}. If so, will contain the span of those two characters (based on starting at ).
Helper to convert simple string literals that escape quotes by doubling them. This is
how normal VB literals and c# verbatim string literals work.
The start characters string. " in VB and @" in C#
Returns the number of characters to jump forward (either 1 or 2).
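The doubled-quote escape convention described above can be sketched as follows. This is a simplified Python illustration of the idea, not the Roslyn implementation; the helper name and return shape are hypothetical.

```python
def collapse_doubled_quotes(raw, quote='"'):
    """Collapse doubled quote characters, recording the span of source text
    each resulting character came from. This mirrors how normal VB string
    literals and C# verbatim (@"...") literals escape quotes by doubling."""
    result = []
    i = 0
    while i < len(raw):
        if raw[i] == quote and i + 1 < len(raw) and raw[i + 1] == quote:
            # Two source characters collapse to one quote character:
            # jump forward 2.
            result.append((quote, (i, i + 2)))
            i += 2
        else:
            # Ordinary character: jump forward 1.
            result.append((raw[i], (i, i + 1)))
            i += 1
    return result

# 'a""b' collapses to a, ", b — the quote spans two source characters.
print(collapse_doubled_quotes('a""b'))
```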
Abstraction to allow generic algorithms to run over a string or without any
overhead.
Helper service that takes the raw text of a string token and produces the individual
characters that the raw string token represents (i.e. with escapes collapsed). The difference
between this and the result from token.ValueText is that for each collapsed character
returned the original span of text in the original token can be found. i.e. if you had the
following in C#:
"G\u006fo"
Then you'd get back:
'G' -> [0, 1) 'o' -> [1, 7) 'o' -> [7, 8)
This allows for embedded language processing that can refer back to the users' original code
instead of the escaped value we're processing.
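The "G\u006fo" example above can be sketched in a few lines. This is a deliberately simplified Python model of the virtual-char mapping that only handles \uXXXX escapes; the real service covers the full set of C#/VB escape sequences, and the function name here is hypothetical.

```python
import re

# A \uXXXX escape: six source characters that collapse to one character.
ESCAPE = re.compile(r'\\u([0-9a-fA-F]{4})')

def virtual_chars(text):
    """Map each logical character of a string body to the span of source
    text it came from, collapsing \\uXXXX escapes."""
    result = []
    i = 0
    while i < len(text):
        m = ESCAPE.match(text, i)
        if m:
            # Escape sequence: one logical char spanning six source chars.
            result.append((chr(int(m.group(1), 16)), (i, m.end())))
            i = m.end()
        else:
            # Raw character: single-character span.
            result.append((text[i], (i, i + 1)))
            i += 1
    return result

# For G\u006fo: 'G' -> [0, 1), 'o' -> [1, 7), 'o' -> [7, 8)
print(virtual_chars('G\\u006fo'))
```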
Takes in a string token and return the s corresponding to each
char of the tokens . In other words, for each char
in ValueText there will be a VirtualChar in the resultant array. Each VirtualChar will
specify what char the language considers them to represent, as well as the span of text
in the original that the language created that char from.
For most chars this will be a single character span. i.e. 'c' -> 'c'. However, for
escapes this may be a multi character span. i.e. 'c' -> '\u0063'
If the token is not a string literal token, or the string literal has any diagnostics on
it, then will be returned. Additionally, because a
VirtualChar can only represent a single char, while some escape sequences represent
multiple chars, will also be returned in those cases. All
these cases could be relaxed in the future. But they greatly simplify the
implementation.
If this function succeeds, certain invariants will hold. First, each character in the
sequence of characters in .ValueText will become a single
VirtualChar in the result array with a matching property.
Similarly, each VirtualChar's will abut each other, and
the union of all of them will cover the span of the token's
*not* including the start and quotes.
In essence the VirtualChar array acts as the information explaining how the of the token between the quotes maps to each character in the
token's .
Produces the appropriate escape version of to be placed in a
normal string literal. For example if is the tab
character, then this would produce t as \t is what would go into a string
literal.
provides a uniform view of a language's string token characters regardless if they
were written raw in source, or are the production of a language escape sequence. For example, in C#, in a
normal "" string a Tab character can be written either as the raw tab character (value 9 in
ASCII), or as \t. The former is a single character in the source, while the latter is two characters
(\ and t). will represent both, providing the raw
value of 9 as well as what in the original they occupied.
A core consumer of this system is the Regex parser. That parser wants to work over an array of characters,
however this array of characters is not the same as the array of characters a user types into a string in C# or
VB. For example In C# someone may write: @"\z". This should appear to the user the same as if they wrote "\\z"
and the same as "\\\u007a". However, as these all have wildly different presentations for the user, there needs
to be a way to map back the characters it sees ( '\' and 'z' ) back to the ranges of characters the user wrote.
The value of this as a if such a representation is possible.
s can represent Unicode codepoints that can appear in a except for
unpaired surrogates. If an unpaired high or low surrogate character is present, this value will be . The value of this character can be retrieved from
.
The unpaired high or low surrogate character that was encountered that could not be represented in . If is not , this will be 0.
The span of characters in the original that represent this .
Creates a new from the provided . This operation cannot
fail.
Creates a new from an unpaired high or low surrogate character. This will throw
if is not actually a surrogate character. The resultant
value will be .
Retrieves the scalar value of this character as an . If this is an unpaired surrogate
character, this will be the value of that surrogate. Otherwise, this will be the value of our .
Represents the individual characters that a raw string token represents (i.e. with escapes collapsed).
The difference between this and the result from token.ValueText is that for each collapsed character
returned the original span of text in the original token can be found. i.e. if you had the
following in C#:
"G\u006fo"
Then you'd get back:
'G' -> [0, 1) 'o' -> [1, 7) 'o' -> [7, 8)
This allows for embedded language processing that can refer back to the user's original code
instead of the escaped value we're processing.
Abstraction over a contiguous chunk of s. This
is used so we can expose s over an
or over a . The latter is especially useful for reducing
memory usage in common cases of string tokens without escapes.
Thin wrapper over an actual .
This will be the common construct we generate when getting the
for a string token that has escapes in it.
Thin wrapper over an actual .
This will be the common construct we generate when getting the
for a string token that has escapes in it.
Represents a on top of a normal
string. This is the common case of the type of the sequence we would
create for a normal string token without any escapes in it.
The underlying string that we're returning virtual chars from. Note:
this will commonly include things like quote characters. Clients who
do not want that should then ask for an appropriate
back that does not include those characters.
Represents a on top of a normal
string. This is the common case of the type of the sequence we would
create for a normal string token without any escapes in it.
The underlying string that we're returning virtual chars from. Note:
this will commonly include things like quote characters. Clients who
do not want that should then ask for an appropriate
back that does not include those characters.
The actual characters that this is a portion of.
The portion of that is being exposed. This span
is `[inclusive, exclusive)`.
Gets the number of elements contained in the .
Gets the at the specified index.
Gets a value indicating whether the was declared but not initialized.
Retrieves a sub-sequence from this .
Finds the virtual char in this sequence that contains the position. Will return null if this position is not
in the span of this sequence.
Create a from the .
Combines two s, producing a final
sequence that points at the same underlying data, but spans from the
start of to the end of .
Returns true if the given should be analyzed for the given ,
i.e. either of the following is true:
- is (we are analyzing the entire tree)
OR
- intersects with .
Returns true if the given should be analyzed for the given ,
i.e. either of the following is true:
- is (we are analyzing the entire tree)
OR
- intersects with .
Returns true if the given should be analyzed for the given ,
i.e. either of the following is true:
- is (we are analyzing the entire tree)
OR
- intersects with .
Returns true if the given should be analyzed for the given ,
i.e. either of the following is true:
- is (we are analyzing the entire tree)
OR
- intersects with .
Returns true if the given should be analyzed for the given ,
i.e. either of the following is true:
- is (we are analyzing the entire tree)
OR
- intersects with .
Returns true if the given should be analyzed for the given ,
i.e. either of the following is true:
- is (we are analyzing the entire tree)
OR
- intersects with .
Returns true if the given should be analyzed for the given ,
i.e. either of the following is true:
- is (we are analyzing the entire file)
OR
- intersects with .
Returns true if the given should be analyzed for the given ,
i.e. either of the following is true:
- is (we are analyzing the entire tree)
OR
- intersects with .
Returns true if the given should be analyzed for the given ,
i.e. either of the following is true:
- is (we are analyzing the entire tree)
OR
- intersects with .
Returns true if the given should be analyzed for the given ,
i.e. either of the following is true:
- is (we are analyzing the entire tree)
OR
- intersects with .
Returns true if the given should be analyzed for the given ,
i.e. either of the following is true:
- is (we are analyzing the entire tree)
OR
- intersects with .
Returns true if the given should be analyzed for the given ,
i.e. either of the following is true:
- is (we are analyzing the entire tree)
OR
- intersects with .
Returns true if the given should be analyzed for the given ,
i.e. either of the following is true:
- is (we are analyzing the entire tree)
OR
- intersects with .
Gets the root node in the analysis span for the given .
Gets the root node in the analysis span for the given .
Gets the root node in the analysis span for the given .
NOTE: This method expects
and to be non-null.
Gets the root node in the analysis span for the given .
NOTE: This method expects
and to be non-null.
Gets the root node in the analysis span for the given .
Gets the root node in the analysis span for the given .
Gets the root node in the analysis span for the given .
Gets the root node in the analysis span for the given .
Gets the root node in the analysis span for the given .
Gets the root node in the analysis span for the given .
Returns if the language specific should be deferred to figure out indentation. If so, it
will be asked to the resultant
provided by this method.
An indentation result represents where the indent should be placed. It conveys this through
a pair of values. A position in the existing document where the indent should be relative,
and the number of columns after that the indent should be placed at.
This pairing provides flexibility to the implementor to compute the indentation results in
a variety of ways. For example, one implementation may wish to express indentation of a
newline as being four columns past the start of the first token on a previous line. Another
may wish to simply express the indentation as an absolute amount from the start of the
current line. With this tuple, both forms can be expressed, and the implementor does not
have to convert from one to the other.
An indentation result represents where the indent should be placed. It conveys this through
a pair of values. A position in the existing document where the indent should be relative,
and the number of columns after that the indent should be placed at.
This pairing provides flexibility to the implementor to compute the indentation results in
a variety of ways. For example, one implementation may wish to express indentation of a
newline as being four columns past the start of the first token on a previous line. Another
may wish to simply express the indentation as an absolute amount from the start of the
current line. With this tuple, both forms can be expressed, and the implementor does not
have to convert from one to the other.
The base position in the document that the indent should be relative to. This position
can occur on any line (including the current line, or a previous line).
The number of columns the indent should be at relative to the BasePosition's column.
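The two-part indentation result (a base position plus a column offset) resolves to an absolute column as sketched below. This is a Python illustration of the concept only; the helper name is hypothetical and not part of the Roslyn API.

```python
def resolve_indentation(text, base_position, offset):
    """Resolve an indentation result to an absolute column.

    The base position may sit anywhere on any line (including the current
    line or a previous one); the final indent column is the base position's
    column on its own line plus the offset."""
    # Find the start of the line containing base_position.
    line_start = text.rfind('\n', 0, base_position) + 1
    base_column = base_position - line_start
    return base_column + offset

# Base at the 'd' of "  def" (column 2) with an offset of 4 gives column 6.
print(resolve_indentation("abc\n  def", 6, 4))
```

Either style described above fits this shape: an absolute indent from line start is just a base position at column zero with the full amount as the offset.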
Determines the desired indentation of a given line.
Gets the preferred indentation for if that token were on its own line. This
effectively simulates where the token would be if the user hit enter at the start of the token.
Well known encodings. Used to distinguish serialized encodings with BOM and without BOM.
Returns the precedence of the given expression, mapped down to one of the
values. The mapping is language specific.
Returns the precedence of this expression in a scale specific to a particular
language. These values cannot be compared across languages, but relates the
precedence of expressions in the same language. A smaller value means lower
precedence.
Runs dataflow analysis for the given on the given .
Control flow graph on which to execute analysis.
Dataflow analyzer.
Block analysis data at the end of the exit block.
The algorithm for this CFG walker has been forked from 's internal
implementation for basic block reachability computation, "MarkReachableBlocks";
we should keep them in sync as much as possible.
Analyzer to execute custom dataflow analysis on a control flow graph.
Custom data tracked for each basic block with values at start of the block.
Gets current analysis data for the given basic block, or an empty analysis data.
Gets empty analysis data for first analysis pass on a basic block.
Updates the current analysis data for the given basic block.
Analyze the given basic block and return the block analysis data at the end of the block for its successors.
Analyze the non-conditional fallthrough successor branch for the given basic block
and return the block analysis data for the branch destination.
Analyze the given conditional branch for the given basic block and return the
block analysis data for the branch destinations for the fallthrough and
conditional successor branches.
Merge the given block analysis data instances to produce the resultant merge data.
Returns true if both the given block analysis data instances should be considered equivalent by analysis.
Flag indicating if the dataflow analysis should run on unreachable blocks.
Indicates the kind of flow capture in an .
Indicates an R-Value flow capture, i.e. capture of a symbol's value.
Indicates an L-Value flow capture, i.e. captures of a symbol's location/address.
Indicates both an R-Value and an L-Value flow capture, i.e. captures of a symbol's value and location/address.
These are generated for the left side of a compound assignment operation when there is conditional code on the right side of the compound assignment.
Helper class to detect s that are l-value captures.
L-value captures are essentially captures of a symbol's location/address.
Corresponding s which share the same
as this flow capture, dereferences and writes to this location
subsequently in the flow graph.
For example, consider the below code:
a[i] = x ?? a[j];
The control flow graph contains an initial flow capture of "a[i]" to capture the l-value
of this array element:
FC0 (a[i])
Then it evaluates the right hand side, which can have different
values on different control flow paths, and the resultant value is then written
to the captured location:
FCR0 = result
NOTE: This type is a workaround for https://github.com/dotnet/roslyn/issues/31007
and it can be deleted once that feature is implemented.
Analysis to compute all the symbol writes for local and parameter
symbols in an executable code block, along with the information of whether or not the definition
may be read on some control flow path.
Core analysis data to drive the operation
for operation tree based analysis OR control flow graph based analysis.
Pooled allocated during analysis with the
current instance, which will be freed during .
Set of locals/parameters which are passed by reference to other method calls.
Map from each (symbol, write) to a boolean indicating if the value assigned
at the write is read on some control flow path.
For example, consider the following code:
int x = 0;
x = 1;
Console.WriteLine(x);
This map will have two entries for 'x':
1. Key = (symbol: x, write: 'int x = 0')
Value = 'false', because value assigned to 'x' here **is never** read.
2. Key = (symbol: x, write: 'x = 1')
Value = 'true', because value assigned to 'x' here **may be** read on
some control flow path.
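For straight-line code, the map described above can be computed with a single pass. This Python sketch models only a linear list of operations (the real analysis walks a control flow graph and merges branch data); the function name and operation encoding are hypothetical.

```python
def unread_writes(operations):
    """Given a straight-line list of ('write', symbol) / ('read', symbol)
    operations, return a map from (symbol, write index) to a boolean saying
    whether the value assigned at that write may be read later."""
    last_write = {}   # symbol -> index of its latest live write
    result = {}       # (symbol, write index) -> was the value read?
    for i, (kind, sym) in enumerate(operations):
        if kind == 'write':
            # A new write starts out unread and overwrites the prior one.
            result[(sym, i)] = False
            last_write[sym] = i
        else:  # read
            if sym in last_write:
                result[(sym, last_write[sym])] = True
    return result

# Mirrors the example: int x = 0; x = 1; Console.WriteLine(x);
ops = [('write', 'x'), ('write', 'x'), ('read', 'x')]
print(unread_writes(ops))  # the first write is never read, the second is
```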
Set of locals/parameters that are read at least once.
Set of lambda/local functions whose invocations are currently being analyzed to prevent
infinite recursion for analyzing code with recursive lambda/local function calls.
Current block analysis data used for analysis.
Block analysis data used for an additional conditional branch.
Creates an immutable for the current analysis data.
Resets all the currently tracked symbol writes to be conservatively marked as read.
Analysis data for a particular for
based dataflow analysis OR for the entire executable code block for high level operation
tree based analysis.
Map from each symbol to possible set of reachable write operations that are live at current program point.
A write is live if there is no intermediate write operation that overwrites it.
Gets the currently reachable writes for the given symbol.
Marks the given symbol write as a new unread write operation,
potentially clearing out the prior write operations if is false.
Same as , except this avoids allocations by
enumerating the set directly with a no-alloc enumerator.
Runs dataflow analysis on the given control flow graph to compute symbol usage results
for symbol read/writes.
Runs a fast, non-precise operation tree based analysis to compute symbol usage results
for symbol read/writes.
Dataflow analysis to compute symbol usage information (i.e. reads/writes) for locals/parameters
in a given control flow graph, along with the information of whether or not the writes
may be read on some control flow path.
Map from basic block to current for dataflow analysis.
Callback to analyze lambda/local function invocations and return new block analysis data.
Map from flow capture ID to set of captured symbol addresses along all possible control flow paths.
Map from operations to potential delegate creation targets that could be invoked via delegate invocation
on the operation.
Used to analyze delegate creations/invocations of lambdas and local/functions defined in a method.
Map from local functions to the where the local function was accessed
to create an invocable delegate. This control flow graph is required to lazily get or create the
control flow graph for this local function at delegate invocation callsite.
Map from lambdas to the where the lambda was defined
to create an invocable delegate. This control flow graph is required to lazily get or create the
control flow graph for this lambda at delegate invocation callsite.
Map from basic block range to set of writes within this block range.
Used for try-catch-finally analysis, where start of catch/finally blocks should
consider all writes in the corresponding try block as reachable.
Flow captures for l-value or address captures.
Special handling to ensure that at start of catch/filter/finally region analysis,
we mark all symbol writes from the corresponding try region as reachable in the
catch/filter/finally region.
Operations walker used for walking high-level operation tree
as well as control flow graph based operations.
Map from each symbol write to a boolean indicating if the value assigned
at write is used/read on some control flow path.
For example, consider the following code:
int x = 0;
x = 1;
Console.WriteLine(x);
This map will have two entries for 'x':
1. Key = (symbol: x, write: 'int x = 0')
Value = 'false', because value assigned to 'x' here **is never** read.
2. Key = (symbol: x, write: 'x = 1')
Value = 'true', because value assigned to 'x' here **may be** read on
some control flow path.
Set of locals/parameters that are read at least once.
Gets symbol writes that are never read.
WriteOperation will be null for the initial value write to parameter symbols from the callsite.
Returns true if the initial value of the parameter from the caller is used.
Gets the write count for a given local/parameter symbol.
A is a lightweight identifier for a symbol that can be used to
resolve the "same" symbol across compilations. Different symbols have different concepts
of "same-ness". Same-ness is recursively defined as follows:
- Two s are the "same" if they have
the "same" and
equal .
- Two s are the "same" if
they have equal .Name
- Two s are the "same" if they have
Anonymous functions and anonymous-delegates (the special VB synthesized delegate types),
only come into existence when someone has explicitly written a lambda in their source
code. So to appropriately round-trip this symbol we store the location that the lambda
was at so that we can find the symbol again when we resolve the key.
A is a lightweight identifier for a symbol that can be used to
resolve the "same" symbol across compilations. Different symbols have different concepts
of "same-ness". Same-ness is recursively defined as follows:
- Two s are the "same" if they have
the "same" and
equal .
- Two s are the "same" if
they have equal .Name
- Two s are the "same" if they have
the "same" and
equal .
- Two s are the "same" if they have
the "same" ,
equal ,
equal ,
the "same" , and have
the "same" s and
equal s.
- Two s are the "same" if they have
the "same" .
is not used because module identity is not important in practice.
- Two s are the "same" if they have
the "same" ,
equal ,
equal and
the "same" .
- Two s are the "same" if they have
the "same" and
equal .
If the is the global namespace for a
compilation, then it will only match another
global namespace of another compilation.
- Two s are the "same" if they have
the "same" and
equal .
- Two s are the "same" if they have
the "same" .
- Two s are the "same" if they have
the "same" the "same" ,
the "same" , and have
the "same" s and
the "same" s.
- Two are the "same" if they have
the "same" and
the "same" .
- Two s are the "same" if they have
the "same" and
the "same" .
- Two s are the "same" if they have
the "same" .
Interior-method-level symbols (i.e. , , and s) can also
be represented and restored in a different compilation. To resolve these, the destination compilation's is enumerated to list all the symbols with the same and as the original symbol. The symbol with the same index in the destination tree as the
symbol in the original tree is returned. This allows these sorts of symbols to be resolved in a way that is
resilient to basic forms of edits, for example whitespace edits, or adding or removing symbols with
different names and types. However, it may not find a matching symbol in the face of other sorts of edits.
Symbol keys cannot be created for interior-method symbols that were created in a speculative semantic model.
Due to issues arising from errors and ambiguity, it's possible for a SymbolKey to resolve to
multiple symbols. For example, in the following type:
class C
{
    int M();
    bool M();
}
The SymbolKey for both 'M' methods will be the same. The SymbolKey will then resolve to both methods.
s are not guaranteed to work across different versions of Roslyn. They can be persisted
in their string form and used across sessions with the same version of Roslyn. However, future
versions may change the encoded format and may no longer be able to resolve previous keys. As
such, only persist keys when they are used for a cache that can be regenerated if necessary.
The string values produced by (or ) should not be
directly compared for equality or used in hashing scenarios. Specifically, two symbol keys which represent the
'same' symbol might produce different strings. Instead, to compare keys use
to get a suitable comparer that exposes the desired semantics.
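The overload ambiguity described above can be illustrated with a toy key model (a Python sketch under assumed simplifications; the real SymbolKey encoding is richer): if a method key records only name, arity, and parameter types, the two 'M' methods collide and resolution has to return both.

```python
# Toy model of symbol-key resolution (NOT the real Roslyn encoding).
# A method key here records name, arity, and parameter types only, so two
# methods differing solely by return type produce the same key.

def method_key(name, arity, param_types):
    return (name, arity, tuple(param_types))

# 'class C { int M(); bool M(); }' modeled as (name, return_type, params):
methods = [("M", "int", []), ("M", "bool", [])]

def resolve(key, symbols):
    # Resolution returns every symbol whose key matches -- possibly several.
    return [s for s in symbols if method_key(s[0], 0, s[2]) == key]

key = method_key("M", 0, [])
matches = resolve(key, methods)
assert len(matches) == 2  # the key is ambiguous: both 'M' methods match
```

This mirrors why callers must be prepared for a key to resolve to multiple candidate symbols in error scenarios.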
Current format version. Any time we change anything about our format, we should
change this. This will help us detect and reject any cases where a person serializes
out a SymbolKey from a previous version of Roslyn and then attempts to use it in a
newer version where the encoding has changed.
Constructs a new representing the provided .
Returns an that determines if two s
represent the same effective symbol.
Whether or not casing should be considered when comparing keys.
For example, with ignoreCase=true then X.SomeClass and X.Someclass would be
considered the same effective symbol
Whether or not the originating assembly of referenced
symbols should be compared when determining if two symbols are effectively the same.
For example, with ignoreAssemblyKeys=true then an X.SomeClass from assembly
A and X.SomeClass from assembly B will be considered the same
effective symbol.
Tries to resolve this in the given
to a matching symbol.
Returns this encoded as a string. This can be persisted
and used later with to then try to resolve back
to the corresponding in a future .
This string form is not guaranteed to be reusable across all future versions of
Roslyn. As such it should only be used for caching data, with the knowledge that
the data may need to be recomputed if the cached data can no longer be used.
For a symbol like System.Collections.Generic.IEnumerable, this would produce "Generic",
"Collections", "System"
Reads an array of symbols out from the key. Note: the number of symbols returned will either be the
same as the original amount written, or default will be returned. It will never be less or more.
default will be returned if any elements could not be resolved to the requested type in the provided .
Callers should the instance returned. No check is necessary if
default was returned before calling
If default is returned then will be non-null. Similarly, if
is non-null, then only default will be returned.
Writes out the provided symbols to the key. The array provided must not
be default.
The result of . If the could be uniquely mapped to a
single then that will be returned in . Otherwise, if the key resolves
to multiple symbols (which can happen in error scenarios), then and will be returned.
If no symbol can be found will be null and
will be empty.
Helpers used for public API argument validation.
Use to validate public API input for properties that are exposed as .
Use to validate public API input for properties that are exposed as and
whose items should be unique.
Helpers to create temporary streams backed by pooled memory
Sets the length of this stream (see . If is , the internal buffers will be left as is, and the data in them will be left as garbage.
If it is then any fully unused chunks will be discarded. If there is a final chunk
the stream is partway through, the remainder of that chunk will be zeroed out.
Implements an event that can be subscribed to without keeping the subscriber alive for the lifespan of
the object that declares .
Unlike handlers created via , these handlers may capture state, which makes the subscribers simpler
and doesn't risk accidental leaks.
Implements an event that can be subscribed to without keeping the subscriber alive for the lifespan of
the object that declares .
Unlike handlers created via , these handlers may capture state, which makes the subscribers simpler
and doesn't risk accidental leaks.
Each registered event handler has the lifetime of an associated owning object. This table ensures the weak
references to the event handlers are not cleaned up while the owning object is still alive.
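The owner-keyed table described above can be sketched in Python with `weakref.WeakKeyDictionary` (a hypothetical analogue, not the Roslyn implementation): handlers live exactly as long as their owning object, so subscribing never keeps the subscriber alive.

```python
import gc
import weakref

class WeakEvent:
    """Event that does not keep subscribers alive (sketch of the idea above).

    Handlers are stored in a WeakKeyDictionary keyed by an owning object:
    while the owner is alive, its handlers (which may capture state) are kept
    alive too; once the owner is collected, the subscription disappears.
    """

    def __init__(self):
        self._handlers = weakref.WeakKeyDictionary()

    def add_handler(self, owner, handler):
        self._handlers.setdefault(owner, []).append(handler)

    def raise_event(self):
        for handlers in list(self._handlers.values()):
            for handler in handlers:
                handler()

class Owner:
    pass

event = WeakEvent()
calls = []
owner = Owner()
event.add_handler(owner, lambda: calls.append(1))
event.raise_event()          # handler runs while the owner is alive
del owner
gc.collect()                 # owner collected -> subscription dropped
event.raise_event()          # no handler runs any more
assert calls == [1]
```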
Returns the for the given operation.
This extension can be removed once https://github.com/dotnet/roslyn/issues/25057 is implemented.
When referring to a variable, this method should only return a 'write' result if the variable is entirely
overwritten, not if the variable is written through. For example, a write to a property on a struct
variable is not a write to the struct variable (though at runtime it might impact the value in some fashion).
Put another way, this only returns 'write' when it is certain that the entire value is overwritten.
Returns true if the given operation is a regular compound assignment,
i.e. such as a += b,
or a special null coalescing compound assignment, i.e.
such as a ??= b.
Walks down consecutive conversion operations until an operand is reached that isn't a conversion operation.
The starting operation.
The inner non-conversion operation, or the starting operation if it wasn't a conversion operation.
Provides information about the way a particular symbol is being used at a symbol reference node.
For namespaces and types, this corresponds to values from .
For methods, fields, properties, events, locals and parameters, this corresponds to values from .
Represents default value indicating no usage.
Represents a reference to a namespace or type on the left side of a dotted name (qualified name or member access).
For example, 'NS' in NS.Type x = new NS.Type(); or NS.Type.StaticMethod(); or
'Type' in Type.NestedType x = new Type.NestedType(); or Type.StaticMethod();
Represents a generic type argument reference.
For example, 'Type' in Generic{Type} x = ...; or class Derived : Base{Type} { }
Represents a type parameter constraint that is a type.
For example, 'Type' in class Derived{T} where T : Type { }
Represents a base type or interface reference in the base list of a named type.
For example, 'Base' in class Derived : Base { }.
Represents a reference to a type whose instance is being created.
For example, 'C' in var x = new C();, where 'C' is a named type.
Represents a reference to a namespace or type within a using or imports directive.
For example, using NS; or using static NS.Extensions or using Alias = MyType.
Represents a reference to a namespace name in a namespace declaration context.
For example, 'N1' or 'N2' in namespace N1.N2 { }.
Represents default value indicating no usage.
Represents a value read.
For example, reading the value of a local/field/parameter.
Represents a value write.
For example, assigning a value to a local/field/parameter.
Represents a reference being taken for the symbol.
For example, passing an argument to an "in", "ref" or "out" parameter.
Represents a name-only reference that neither reads nor writes the underlying value.
For example, 'nameof(x)' or reference to a symbol 'x' in a documentation comment
does not read or write the underlying value stored in 'x'.
Represents a value read and/or write.
For example, an increment or compound assignment operation.
Represents a readable reference being taken to the value.
For example, passing an argument to an "in" or "ref readonly" parameter.
Represents a writable reference being taken to the value.
For example, passing an argument to an "out" parameter.
Represents a value read or write.
For example, passing an argument to a "ref" parameter.
this is RAII object to automatically release pooled object when its owning pool
Shared object pool for Roslyn
Use this shared pool if the only concern is reducing object allocations.
If the perf of the object pool itself is also a concern, use ObjectPool directly.
For example, if you want to create a million small objects within a second,
use the ObjectPool directly; it should have much less overhead than using this.
pool that uses default constructor with 100 elements pooled
pool that uses default constructor with 20 elements pooled
pool that uses string as key with StringComparer.OrdinalIgnoreCase as key comparer
pool that uses string as element with StringComparer.OrdinalIgnoreCase as element comparer
pool that uses string as element with StringComparer.Ordinal as element comparer
Used to reduce the # of temporary byte[]s created to satisfy serialization and
other I/O requests
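The shared-pool idea can be sketched as a minimal object pool in Python (names and sizes here are illustrative, not Roslyn's): allocate returns a pooled instance when one is available, and free returns it for reuse, reducing allocations.

```python
class ObjectPool:
    """Minimal fixed-size object pool (a sketch, not Roslyn's ObjectPool).

    allocate() returns a pooled instance if available, otherwise a new one;
    free() returns an instance to the pool for reuse, reducing allocations.
    """

    def __init__(self, factory, size=20):
        self._factory = factory
        self._size = size
        self._items = []

    def allocate(self):
        return self._items.pop() if self._items else self._factory()

    def free(self, item):
        # Only keep up to 'size' pooled instances; extras are dropped.
        if len(self._items) < self._size:
            self._items.append(item)

pool = ObjectPool(list, size=2)
a = pool.allocate()
pool.free(a)
b = pool.allocate()
assert b is a  # the freed instance is reused instead of allocating anew
```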
Determines if a is under-selected given .
Under-selection is defined as omitting whole nodes from either the beginning or the end. It can be used, e.g., to
detect that the selection `1 + [|2 + 3|]` is under-selecting the whole expression node tree.
Returns false if only and precisely one is selected. In that case the is treated more as a caret location.
It's intended to be used in conjunction with that, for
non-empty selections, returns the smallest encompassing node. A node that can, for certain refactorings, be too
large given user-selection even though it is the smallest that can be retrieved.
When doesn't intersect the node in any way it's not considered to be
under-selected.
Null node is always considered under-selected.
Trims leading and trailing whitespace from .
Returns unchanged in case .
Returns empty Span with original in case it contains only whitespace.
Options customizing member display. Used by multiple features.
Options that we expect the user to set in editorconfig.
Opens a scope that will call with an instance of on once disposed. This is useful to
easily wrap a series of operations and know that progress will be reported no matter how it completes.
Returns for canMove if is a local
declaration statement that can be moved forward to be closer to its first reference.
Moves closer to its first reference. Only
applicable if returned
true. If not, then the original document will be returned unchanged.
Returns an updated with all the
local declarations named '_' replaced with simple assignments to discard.
For example,
1. int _ = M(); is replaced with _ = M();
2. int x = 1, _ = M(), y = 2; is replaced with following statements:
int x = 1;
_ = M();
int y = 2;
This is normally done in the context of a code transformation that generates new discard assignment(s),
such as _ = M();, and wants to prevent compiler errors where the containing method already
has a discard variable declaration, say var _ = M2();, at some line after the one
where the code transformation wants to generate new discard assignment(s).
This method replaces such discard variable declarations with discard assignments.
Gets the declared symbol and root operation from the passed in declarationSyntax and calls . Note that this is bool and not bool? because we know that the symbol
is at the very least declared, so there's no need to return a null value.
Given an operation, goes through all descendant operations and returns true if the symbol passed in
is ever assigned a possibly null value, as determined by nullable flow state. Returns
null if no references are found, letting the caller determine what to do with that information.
Represents a content that has been parsed.
Used to front-load parsing and retrieval to a caller that has knowledge of whether or not these operations
should be performed synchronously or asynchronously. The is then passed to a feature whose implementation is entirely synchronous.
In general, any feature API that accepts should be synchronous and not access or snapshots.
In exceptional cases such an API may be asynchronous, as long as it completes synchronously in most common cases and async completion is rare. It is still desirable to improve the design
of such a feature to either not be invoked on a UI thread or be entirely synchronous.
Represents a content that has been parsed.
Used to front-load parsing and retrieval to a caller that has knowledge of whether or not these operations
should be performed synchronously or asynchronously. The is then passed to a feature whose implementation is entirely synchronous.
In general, any feature API that accepts should be synchronous and not access or snapshots.
In exceptional cases such an API may be asynchronous, as long as it completes synchronously in most common cases and async completion is rare. It is still desirable to improve the design
of such a feature to either not be invoked on a UI thread or be entirely synchronous.
Equivalent semantics to
Throws a non-accessible exception if the provided value is null. This method executes in
all builds
Throws a non-accessible exception if the provided value is null. This method executes in
all builds
Throws a non-accessible exception if the provided value is null. This method executes in
all builds
Throws a non-accessible exception if the provided value is null. This method executes in
all builds
Throws a non-accessible exception if the provided value is false. This method executes
in all builds
Throws a non-accessible exception if the provided value is false. This method executes
in all builds
Throws a non-accessible exception if the provided value is false. This method executes
in all builds
Throws a non-accessible exception if the provided value is true. This method executes in
all builds.
Throws a non-accessible exception if the provided value is true. This method executes in
all builds.
Throws a non-accessible exception if the provided value is true. This method executes in
all builds.
Creates an with information about an unexpected value.
The unexpected value.
The , which should be thrown by the caller.
Determine if an exception was an , and that the provided token caused the cancellation.
The exception to test.
Checked to see if the provided token was cancelled.
if the exception was an and the token was canceled.
A queue where items can be added to to be processed in batches after some delay has passed. When processing
happens, all the items added since the last processing point will be passed along to be worked on. Rounds of
processing happen serially, only starting up after a previous round has completed.
Failure to complete a particular batch (either due to cancellation or some faulting error) will not prevent
further batches from executing. The only thing that will permanently stop this queue from processing items is if
the passed to the constructor switches to .
Delay we wait after finishing the processing of one batch and starting up on the next.
Equality comparer used to dedupe items if present.
Callback to actually perform the processing of the next batch of work.
Cancellation token controlling the entire queue. Once this is triggered, we don't want to do any more work
at all.
Cancellation series we use so we can cancel individual batches of work if requested. The client of the
queue can cancel existing work by either calling directly, or passing to . Work in the queue that has not started will be
immediately discarded. The cancellation token passed to will be triggered
allowing the client callback to cooperatively cancel the current batch of work it is performing.
Lock we will use to ensure the remainder of these fields can be accessed in a thread-safe
manner. When work is added we'll place the data into .
We'll then kick off a task to process this in the future if we don't already have an
existing task in flight for that.
Data added that we want to process in our next update task.
CancellationToken controlling the next batch of items to execute.
Used if is present to ensure only unique items are added to .
Task kicked off to do the next batch of processing of . These
tasks form a chain so that the next task only processes when the previous one completes.
Whether or not there is an existing task in flight that will process the current batch
of . If there is an existing in flight task, we don't need to
kick off a new one if we receive more work before it runs.
Callback to process queued work items. The list of items passed in is
guaranteed to always be non-empty.
Cancels any outstanding work in this queue. Work that has not yet started will never run. Work that is in
progress will request cancellation in a standard best effort fashion.
Waits until the current batch of work completes and returns the last value successfully computed from . If the last canceled or failed, then a
corresponding canceled or faulted task will be returned that propagates that outwards.
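The queue's behavior can be sketched with asyncio (a simplified Python model of the description above; the real implementation also handles cancellation, dedup, and chaining of rounds): items added before the delay elapses are processed together in one batch.

```python
import asyncio

class BatchingQueue:
    """Simplified delay-batched work queue (a sketch, not the Roslyn one)."""

    def __init__(self, delay, process_batch):
        self._delay = delay
        self._process = process_batch
        self._pending = []      # items added since the last processing point
        self._task = None       # in-flight task for the next batch, if any

    def add(self, item):
        self._pending.append(item)
        # Only kick off a new round if one isn't already in flight.
        if self._task is None or self._task.done():
            self._task = asyncio.ensure_future(self._run())

    async def _run(self):
        await asyncio.sleep(self._delay)        # wait out the batching delay
        batch, self._pending = self._pending, []
        await self._process(batch)              # rounds run serially

async def main():
    batches = []

    async def process(items):
        batches.append(items)

    queue = BatchingQueue(0.05, process)
    queue.add(1)
    queue.add(2)
    queue.add(3)                # all three land in the same batch
    await asyncio.sleep(0.2)    # let the round complete
    return batches

result = asyncio.run(main())
assert result == [[1, 2, 3]]
```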
Produces a series of objects such that requesting a new token
causes the previously issued token to be cancelled.
Consuming code is responsible for managing overlapping asynchronous operations.
This class has a lock-free implementation to minimise latency and contention.
Initializes a new instance of .
An optional cancellation token that, when cancelled, cancels the last
issued token and causes any subsequent tokens to be issued in a cancelled state.
Determines if the cancellation series has an active token which has not been cancelled.
Creates the next in the series, ensuring the last issued
token (if any) is cancelled first.
An optional cancellation token that, when cancelled, cancels the
returned token.
A cancellation token that will be cancelled when either:
- is called again
- The token passed to this method (if any) is cancelled
- The token passed to the constructor (if any) is cancelled
- is called
This object has been disposed.
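The series' contract can be sketched in a few lines of Python (a toy model with hypothetical names; the real type is lock-free and integrates with CancellationToken): creating the next token cancels the previously issued one.

```python
class CancellationToken:
    """Toy cancellation token: a single mutable 'cancelled' flag."""

    def __init__(self):
        self.cancelled = False

class CancellationSeries:
    """Sketch of a cancellation series, per the description above:
    creating the next token cancels the previously issued one."""

    def __init__(self):
        self._current = None

    def has_active_token(self):
        return self._current is not None and not self._current.cancelled

    def create_next(self):
        if self._current is not None:
            self._current.cancelled = True   # cancel the last issued token
        self._current = CancellationToken()
        return self._current

series = CancellationSeries()
t1 = series.create_next()
t2 = series.create_next()        # issuing t2 cancels t1
assert t1.cancelled and not t2.cancelled
assert series.has_active_token()
```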
A custom awaiter that supports for
.
Returns an awaitable for the specified task that will never throw, even if the source task
faults or is canceled.
The task whose completion should signal the completion of the returned awaitable.
if set to true the continuation will be scheduled on the caller's context; false to always execute the continuation on the threadpool.
An awaitable.
An awaitable that wraps a task and never throws an exception when waited on.
The task.
A value indicating whether the continuation should be scheduled on the current sync context.
Initializes a new instance of the struct.
The task.
Whether the continuation should be scheduled on the current sync context.
Gets the awaiter.
The awaiter.
An awaiter that wraps a task and never throws an exception when waited on.
The task.
A value indicating whether the continuation should be scheduled on the current sync context.
Initializes a new instance of the struct.
The task.
if set to true [capture context].
Gets a value indicating whether the task has completed.
Schedules a delegate for execution at the conclusion of a task's execution.
The action.
Schedules a delegate for execution at the conclusion of a task's execution
without capturing the ExecutionContext.
The action.
Does nothing.
Returns an awaitable for the specified task that will never throw, even if the source task
faults or is canceled.
The task whose completion should signal the completion of the returned awaitable.
if set to the continuation will be scheduled on the caller's context; to always execute the continuation on the threadpool.
An awaitable.
Returns an awaitable for the specified task that will never throw, even if the source task
faults or is canceled.
The awaitable returned by this method does not provide access to the result of a successfully-completed
. To await without throwing and use the resulting value, the following
pattern may be used:
var methodValueTask = MethodAsync().Preserve();
await methodValueTask.NoThrowAwaitableInternal(true);
if (methodValueTask.IsCompletedSuccessfully)
{
    var result = methodValueTask.Result;
}
else
{
    var exception = methodValueTask.AsTask().Exception.InnerException;
}
The task whose completion should signal the completion of the returned awaitable.
if set to the continuation will be scheduled on the caller's context; to always execute the continuation on the threadpool.
An awaitable.
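An analogous no-throw await can be sketched in Python (an illustration of the pattern, not a port; unlike NoThrowAwaitable, this sketch lets cancellation propagate): the awaited result or exception is inspected afterwards instead of being thrown at the await site.

```python
import asyncio

async def no_throw(awaitable):
    """Await without throwing: return (result, exception) instead.

    Cancellation is deliberately re-raised here; Python treats it as
    control flow, whereas NoThrowAwaitable also swallows it.
    """
    try:
        return await awaitable, None
    except asyncio.CancelledError:
        raise
    except Exception as exc:
        return None, exc

async def might_fail():
    raise ValueError("boom")

async def main():
    result, exc = await no_throw(might_fail())
    # The failure surfaces as a value to inspect, not a raised exception.
    return result is None and isinstance(exc, ValueError)

ok = asyncio.run(main())
assert ok
```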
The type of the result.
An awaitable that wraps a task and never throws an exception when waited on.
The task.
A value indicating whether the continuation should be scheduled on the current sync context.
Initializes a new instance of the struct.
The task.
Whether the continuation should be scheduled on the current sync context.
Gets the awaiter.
The awaiter.
An awaiter that wraps a task and never throws an exception when waited on.
The task.
A value indicating whether the continuation should be scheduled on the current sync context.
Initializes a new instance of the struct.
The task.
if set to [capture context].
Gets a value indicating whether the task has completed.
Schedules a delegate for execution at the conclusion of a task's execution.
The action.
Schedules a delegate for execution at the conclusion of a task's execution
without capturing the ExecutionContext.
The action.
Does nothing.
An awaitable that wraps a and never throws an exception when waited on.
The type of the result.
The task.
A value indicating whether the continuation should be scheduled on the current sync context.
Initializes a new instance of the struct.
The task.
Whether the continuation should be scheduled on the current sync context.
Gets the awaiter.
The awaiter.
An awaiter that wraps a task and never throws an exception when waited on.
The type of the result.
The task.
A value indicating whether the continuation should be scheduled on the current sync context.
Initializes a new instance of the struct.
The task.
if set to [capture context].
Gets a value indicating whether the task has completed.
Schedules a delegate for execution at the conclusion of a task's execution.
The action.
Schedules a delegate for execution at the conclusion of a task's execution
without capturing the ExecutionContext.
The action.
Does nothing.
Explicitly indicates result is void
Implements ConfigureAwait(bool) for . The resulting behavior in asynchronous code
is the same as one would expect for .
The awaitable provided by .
An object used to await this yield.
An error occurred while reading the specified configuration file: {0}
Symbol "{0}" is not from source.
Cycle detected in extensions
Duplicate source file '{0}' in project '{1}'
Removing projects is not supported.
Adding projects is not supported.
Workspace error
Error reading content of source file '{0}' -- '{1}'.
Workspace is not empty.
'{0}' is not part of the workspace.
'{0}' is already part of the workspace.
'{0}' is not referenced.
'{0}' is already referenced.
Adding project reference from '{0}' to '{1}' will cause a circular reference.
Metadata is not referenced.
Metadata is already referenced.
{0} is not present.
{0} is already present.
The specified document is not a version of this document.
The language '{0}' is not supported.
The solution already contains the specified project.
The solution does not contain the specified project.
The project already references the target project.
The project already contains the specified reference.
A project may not reference itself.
The solution already contains the specified reference.
Temporary storage cannot be written more than once.
'{0}' is not open.
File was externally modified: {0}.
Unrecognized language name.
Can't resolve metadata reference: '{0}'.
Can't resolve analyzer reference: '{0}'.
Expected {0}.
"{0}" must be a non-null and non-empty string.
This submission already references another submission project.
Only submission project can reference submission projects.
{0} still contains open documents.
{0} is still open.
Cannot open project '{0}' because the file extension '{1}' is not associated with a language.
Cannot open project '{0}' because the language '{1}' is not supported.
Invalid project file path: '{0}'
Invalid solution file path: '{0}'
Project file not found: '{0}'
Solution file not found: '{0}'
TODO: Unmerged change from project '{0}'
After:
Before:
Adding additional documents is not supported.
Adding analyzer config documents is not supported.
Adding analyzer references is not supported.
Adding documents is not supported.
Adding project references is not supported.
Changing additional documents is not supported.
Changing analyzer config documents is not supported.
Changing documents is not supported.
Removing additional documents is not supported.
Removing analyzer config documents is not supported.
Removing analyzer references is not supported.
Removing documents is not supported.
Removing project references is not supported.
Service of type '{0}' is required to accomplish the task but is not available from '{1}' workspace.
At least one diagnostic must be supplied.
Diagnostic must have span '{0}'
Label for node '{0}' is invalid, it must be within [0, {1}).
Matching nodes '{0}' and '{1}' must have the same label.
Node '{0}' must be contained in the new tree.
Node '{0}' must be contained in the old tree.
The member '{0}' is not declared within the declaration of the symbol.
The position is not within the symbol's declaration
The symbol '{0}' cannot be located within the current solution.
Changing compilation options is not supported.
Changing parse options is not supported.
The node is not part of the tree.
This workspace does not support opening and closing documents.
Exceptions:
'{0}' returned an uninitialized ImmutableArray
Failure
Warning
Options did not come from specified Solution
Enable
Enable and ignore future errors
'{0}' encountered an error and has been disabled.
Show Stack Trace
Async Method
Error
None
Suggestion
File '{0}' size of {1} exceeds maximum allowed size of {2}
Changing document properties is not supported
Variables captured:
Refactoring Only
Remove the line below if you want to inherit .editorconfig settings from higher directories
Core EditorConfig Options
C# files
.NET Coding Conventions
Visual Basic files
Changing document '{0}' is not supported.
DateTimeKind must be Utc
Adding imports will bring an extension method into scope with the same name as '{0}'
{0} is in a different project.
Project does not contain specified reference
Solution does not contain specified reference
Unknown
Cannot apply action that is not in '{0}'
Symbol's project could not be found in the provided solution
The contents of a SourceGeneratedDocument may not be changed.
Rename '{0}' to '{1}'
Sync namespace to folder structure
CodeAction '{0}' did not produce a changed solution
Predefined conversion from {0} to {1}.
'FixAllScope.ContainingType' and 'FixAllScope.ContainingMember' are not supported with this constructor
'FixAllScope.Custom' is not supported with this API
Failed to resolve rename conflicts
Use 'TextDocument' property instead of 'Document' property as the provider supports non-source text documents.
Unexpected value '{0}' in DocumentKinds array.
Running code cleanup on fixed documents
Applying changes to {0}
Removing compilation options is not supported
Removing parse options is not supported
Changing project language is not supported
Changing project between ordinary and interactive submission is not supported
Absolute path expected
Absolute path expected.
Organize usings
this. and Me. preferences
Language keywords vs BCL types preferences
Parentheses preferences
Modifier preferences
Expression-level preferences
Field preferences
Parameter preferences
Suppression preferences
Pascal Case
Abstract Method
Begins with I
Class
Delegate
Enum
Event
Interface
Non-Field Members
Private Method
Private or Internal Field
Private or Internal Static Field
Property
Public or Protected Field
Static Field
Static Method
Struct
Types
Method
Missing prefix: '{0}'
Missing suffix: '{0}'
Prefix '{0}' does not match expected prefix '{1}'
Prefix '{0}' is not expected
These non-leading words must begin with an upper case letter: {0}
These non-leading words must begin with a lowercase letter: {0}
These words cannot contain lower case characters: {0}
These words cannot contain upper case characters: {0}
These words must begin with upper case characters: {0}
The first word, '{0}', must begin with an upper case character
The first word, '{0}', must begin with a lower case character
Cast is redundant.
Naming styles
Naming rules
Symbol specifications
Specified sequence has duplicate items
New line preferences
Instantiated part(s) threw exception(s) from IDisposable.Dispose().
Indentation and spacing
Value too large to be represented as a 30 bit unsigned integer.
A language name cannot be specified for this option.
A language name must be specified for this option.
Stream must support read and seek operations.
Supplied diagnostic cannot be null.
Fix all '{0}'
Fix all '{0}' in '{1}'
Fix all '{0}' in Solution
Fix all '{0}' in Containing member
Fix all '{0}' in Containing type
Compilation is required to accomplish the task but is not supported by project {0}.
Syntax tree is required to accomplish the task but is not supported by document {0}.
Project of ID {0} is required to accomplish the task but is not available from the solution
The solution does not contain the specified document.
Warning: Declaration changes scope and may change meaning.
Could not find location to generate symbol into.
Destination location was from a different tree.
Destination location was not in source.
Destination type must be a {0}, {1}, {2} or {3}, but given one is {4}.
Destination type must be a {0}, {1} or {2}, but given one is {3}.
Destination type must be a {0}, but given one is {1}.
Destination type must be a {0} or a {1}, but given one is {2}.
Invalid number of parameters for binary operator.
Invalid number of parameters for unary operator.
Location must be null or from source.
No available location found to add statements to.
No location provided to add statements to.
Cannot generate code for unsupported operator '{0}'
Namespace cannot be added in this destination.
Type members
Document does not support syntax trees
Creates a for a weakly-held reference that has since been collected.
The hash code of the collected value.
A weak which was already collected.
Explicitly a reference type so that the consumer of this in can safely operate on an
instance without having to lock to ensure it sees the entirety of the value written out.
Explicitly a reference type so that the consumer of this in can safely operate on an
instance without having to lock to ensure it sees the entirety of the value written out.
A simple collection of values held as weak references. Objects in the set are compared by reference equality.
The type of object stored in the set.
This partial contains methods that must be shared by source with the workspaces layer
Given a path to an assembly, returns its MVID (Module Version ID).
May throw.
If the file at does not exist or cannot be accessed.
If the file is not an assembly or is somehow corrupted.
NOTE!!! adding duplicates will result in exceptions.
Being concurrent only allows accessing the dictionary without taking locks.
Duplicate keys are still not allowed in the hashtable.
If unsure about adding unique items use APIs such as TryAdd, GetOrAdd, etc...
Generally is a sufficient method for enforcing DEBUG-only
invariants in our code. When it triggers, it provides a nice stack trace for
investigation. Generally that is enough.
There are cases for which a stack trace is not enough and we need a full heap dump to
investigate the failure. This method takes care of that. The behavior is that, when running
in our CI environment, if the assert triggers we will rudely crash the process and
produce a heap dump for investigation.
This method is necessary to avoid an ambiguity between and .
This method is necessary to avoid an ambiguity between and .
Maps an immutable array through a function that returns ValueTask, returning the new ImmutableArray.
Maps an immutable array through a function that returns ValueTask, returning the new ImmutableArray.
Maps an immutable array through a function that returns ValueTask, returning the new ImmutableArray.
Returns the only element of the specified sequence if it has exactly one, and default(TSource) otherwise.
Unlike doesn't throw if there is more than one element in the sequence.
Cached versions of commonly used delegates.
Cached versions of commonly used delegates.
Ensure that the given target value is initialized (not null) in a thread-safe manner.
The type of the target value. Must be a reference type.
The target to initialize.
A factory delegate to create a new instance of the target value. Note that this delegate may be called
more than once by multiple threads, but only one of those values will successfully be written to the target.
The target value.
Ensure that the given target value is initialized (not null) in a thread-safe manner.
The type of the target value. Must be a reference type.
The target to initialize.
The type of the argument passed to the value factory.
A factory delegate to create a new instance of the target value. Note that this delegate may be called
more than once by multiple threads, but only one of those values will successfully be written to the target.
An argument passed to the value factory.
The target value.
Ensure that the given target value is initialized in a thread-safe manner.
The target to initialize.
The value indicating is not yet initialized.
A factory delegate to create a new instance of the target value. Note that this delegate may be called
more than once by multiple threads, but only one of those values will successfully be written to the target.
An argument passed to the value factory.
The type of the argument passed to the value factory.
If returns a value equal to , future
calls to the same method may recalculate the target value.
The target value.
Ensure that the given target value is initialized in a thread-safe manner. This overload supports the
initialization of value types, and reference type fields where is considered an
initialized value.
The type of the target value.
A target value box to initialize.
A factory delegate to create a new instance of the target value. Note that this delegate may be called
more than once by multiple threads, but only one of those values will successfully be written to the target.
The target value.
Ensure that the given target value is initialized in a thread-safe manner. This overload supports the
initialization of value types, and reference type fields where is considered an
initialized value.
The type of the target value.
A target value box to initialize.
The type of the argument passed to the value factory.
A factory delegate to create a new instance of the target value. Note that this delegate may be called
more than once by multiple threads, but only one of those values will successfully be written to the target.
An argument passed to the value factory.
The target value.
Initialize the value referenced by in a thread-safe manner.
The value is changed to only if the current value is null.
Type of value.
Reference to the target location.
The value to use if the target is currently null.
The new value referenced by . Note that this is
nearly always more useful than the usual return from
because it saves another read to .
Initialize the value referenced by in a thread-safe manner.
The value is changed to only if the current value
is .
Type of value.
Reference to the target location.
The value to use if the target is currently uninitialized.
The uninitialized value.
The new value referenced by . Note that this is
nearly always more useful than the usual return from
because it saves another read to .
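The "store only if still uninitialized, then return whatever actually won" contract described above can be sketched in Python (hypothetical names; a one-element list stands in for a by-ref location, and a lock stands in for the atomic compare-exchange):

```python
import threading

_lock = threading.Lock()

def interlocked_initialize(box: list, value):
    """Store `value` into box[0] only if box[0] is still None (the
    'uninitialized' marker), then return the value actually stored.
    Callers should use the return value rather than re-reading the
    location, mirroring the note above about saving another read."""
    with _lock:
        if box[0] is None:
            box[0] = value
        return box[0]
```

The key observable behavior is that a losing initializer's value is discarded and every caller sees the single winning value.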
Initialize the immutable array referenced by in a thread-safe manner.
Elemental type of the array.
Reference to the target location.
The value to use if the target is currently uninitialized (default).
The new value referenced by . Note that this is
nearly always more useful than the usual return from
because it saves another read to .
Initialize the immutable array referenced by in a thread-safe manner.
Elemental type of the array.
Callback to produce the array if is 'default'. May be
called multiple times in the event of concurrent initialization of . Will not be
called if 'target' is already not 'default' at the time this is called.
The value of after initialization. If is
already initialized, that value will be returned.
Initialize the immutable array referenced by in a thread-safe manner.
Elemental type of the array.
The type of the argument passed to the value factory.
Callback to produce the array if is 'default'. May be
called multiple times in the event of concurrent initialization of . Will not be
called if 'target' is already not 'default' at the time this is called.
The value of after initialization. If is
already initialized, that value will be returned.
Compares objects based upon their reference identity.
A lazily initialized version of which uses the same space as a .
One of three values:
- 0. is not initialized yet.
- 1. is currently being initialized by some thread.
- 2. has been initialized.
Actual stored value. Only safe to read once is set to 2.
Ensure that the given target value is initialized in a thread-safe manner.
A factory delegate to create a new instance of the target value. Note that this
delegate may be called more than once by multiple threads, but only one of those values will successfully be
written to the target.
The target value.
An alternative approach here would be to pass and into
, and to only compute the value on the winning thread. However, this has two potential
downsides. First, the computation of the value might take an indeterminate amount of time. This would require
other threads to then busy-spin for that same amount of time. Second, we would have to make the code very
resilient to failure paths (including cancellation), ensuring that the type reset itself safely to the
initial state so that other threads were not perpetually stuck in the busy state.
Checks if the given name is a sequence of valid CLR names separated by a dot.
Remove one set of leading and trailing double quote characters, if both are present.
Implements and static members that are only available in .NET 5.
This is how VB Anonymous Types combine hash values for fields.
This is how VB Anonymous Types combine hash values for fields.
PERF: Do not use with enum types because that involves multiple
unnecessary boxing operations. Unfortunately, we can't constrain
T to "non-enum", so we'll use a more restrictive constraint.
The offset bias value used in the FNV-1a algorithm
See http://en.wikipedia.org/wiki/Fowler%E2%80%93Noll%E2%80%93Vo_hash_function
The generative factor used in the FNV-1a algorithm
See http://en.wikipedia.org/wiki/Fowler%E2%80%93Noll%E2%80%93Vo_hash_function
Compute the FNV-1a hash of a sequence of bytes
See http://en.wikipedia.org/wiki/Fowler%E2%80%93Noll%E2%80%93Vo_hash_function
The sequence of bytes
The FNV-1a hash of
Compute the FNV-1a hash of a sequence of bytes and determine whether the byte
sequence is valid ASCII, and hence whether the hash code matches a char sequence
encoding the same text.
See http://en.wikipedia.org/wiki/Fowler%E2%80%93Noll%E2%80%93Vo_hash_function
The sequence of bytes that are likely to be ASCII text.
True if the sequence contains only characters in the ASCII range.
The FNV-1a hash of
Compute the FNV-1a hash of a sequence of bytes
See http://en.wikipedia.org/wiki/Fowler%E2%80%93Noll%E2%80%93Vo_hash_function
The sequence of bytes
The FNV-1a hash of
Compute the hashcode of a sub-string using FNV-1a
See http://en.wikipedia.org/wiki/Fowler%E2%80%93Noll%E2%80%93Vo_hash_function
Note: FNV-1a was developed and tuned for 8-bit sequences. We're using it here
for 16-bit Unicode chars on the understanding that the majority of chars will
fit into 8-bits and, therefore, the algorithm will retain its desirable traits
for generating hash codes.
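The FNV-1a computation described above can be sketched in Python. The offset basis (2166136261) and prime (16777619) are the standard published 32-bit FNV-1a constants; masking to 32 bits models unsigned int arithmetic. The function names are illustrative, not the actual API:

```python
FNV_OFFSET_BASIS = 2166136261  # standard 32-bit FNV offset basis
FNV_PRIME = 16777619           # standard 32-bit FNV prime

def fnv1a(data: bytes) -> int:
    """Compute the 32-bit FNV-1a hash of a byte sequence."""
    h = FNV_OFFSET_BASIS
    for b in data:
        h = ((h ^ b) * FNV_PRIME) & 0xFFFFFFFF  # FNV-1a: xor first, then multiply
    return h

def fnv1a_text(text: str, prior: int = FNV_OFFSET_BASIS) -> int:
    """Hash 16-bit chars the same way; passing a prior hash combines it
    with the new text, as in the 'combine' overloads."""
    h = prior
    for ch in text:
        h = ((h ^ ord(ch)) * FNV_PRIME) & 0xFFFFFFFF
    return h
```

Note that for ASCII text the per-char hash equals the per-byte hash of the UTF-8 encoding, which is exactly the property the "valid ASCII" overload above relies on.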
Compute the hashcode of a sub-string using FNV-1a
See http://en.wikipedia.org/wiki/Fowler%E2%80%93Noll%E2%80%93Vo_hash_function
Note: FNV-1a was developed and tuned for 8-bit sequences. We're using it here
for 16-bit Unicode chars on the understanding that the majority of chars will
fit into 8-bits and, therefore, the algorithm will retain its desirable traits
for generating hash codes.
The input string
The start index of the first character to hash
The number of characters, beginning with to hash
The FNV-1a hash code of the substring beginning at and ending after characters.
Compute the hashcode of a sub-string using FNV-1a
See http://en.wikipedia.org/wiki/Fowler%E2%80%93Noll%E2%80%93Vo_hash_function
The input string
The start index of the first character to hash
The FNV-1a hash code of the substring beginning at and ending at the end of the string.
Compute the hashcode of a string using FNV-1a
See http://en.wikipedia.org/wiki/Fowler%E2%80%93Noll%E2%80%93Vo_hash_function
The input string
The FNV-1a hash code of
Compute the hashcode of a string using FNV-1a
See http://en.wikipedia.org/wiki/Fowler%E2%80%93Noll%E2%80%93Vo_hash_function
The input string
The FNV-1a hash code of
Compute the hashcode of a sub string using FNV-1a
See http://en.wikipedia.org/wiki/Fowler%E2%80%93Noll%E2%80%93Vo_hash_function
The input string as a char array
The start index of the first character to hash
The number of characters, beginning with to hash
The FNV-1a hash code of the substring beginning at and ending after characters.
Compute the hashcode of a single character using the FNV-1a algorithm
See http://en.wikipedia.org/wiki/Fowler%E2%80%93Noll%E2%80%93Vo_hash_function
Note: In general, this isn't any more useful than "char.GetHashCode". However,
it may be needed if you need to generate the same hash code as a string or
substring with just a single character.
The character to hash
The FNV-1a hash code of the character.
Combine a string with an existing FNV-1a hash code
See http://en.wikipedia.org/wiki/Fowler%E2%80%93Noll%E2%80%93Vo_hash_function
The accumulated hash code
The string to combine
The result of combining with using the FNV-1a algorithm
Combine a char with an existing FNV-1a hash code
See http://en.wikipedia.org/wiki/Fowler%E2%80%93Noll%E2%80%93Vo_hash_function
The accumulated hash code
The new character to combine
The result of combining with using the FNV-1a algorithm
Combine a string with an existing FNV-1a hash code
See http://en.wikipedia.org/wiki/Fowler%E2%80%93Noll%E2%80%93Vo_hash_function
The accumulated hash code
The string to combine
The result of combining with using the FNV-1a algorithm
Search a sorted integer array for the target value in O(log N) time.
The array of integers which must be sorted in ascending order.
The target value.
An index in the array pointing to the position where should be
inserted in order to maintain the sorted order. All values to the right of this position will be
strictly greater than . Note that this may return a position off the end
of the array if all elements are less than or equal to .
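The upper-bound search described above can be sketched as follows (hypothetical name; the actual API is not shown here):

```python
def binary_search_upper_bound(values, target):
    """Return the index where `target` should be inserted into the sorted
    array so that every element to the right is strictly greater.  May
    return len(values) if all elements are <= target."""
    lo, hi = 0, len(values)
    while lo < hi:
        mid = (lo + hi) // 2
        if values[mid] <= target:
            lo = mid + 1   # target belongs after mid
        else:
            hi = mid       # target belongs at or before mid
    return lo
```

This is the classic O(log N) bisection; the loop invariant is that all indices below `lo` hold values <= target and all indices at or above `hi` hold values > target.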
Parse the value provided to an MSBuild Feature option into a list of entries. This will
leave name=value in their raw form.
A concurrent, simplified HashSet.
The default concurrency level is 2. That means the collection can cope with up to two
threads making simultaneous modifications without blocking.
Note ConcurrentDictionary's default concurrency level is dynamic, scaling according to
the number of processors.
Taken from ConcurrentDictionary.DEFAULT_CAPACITY
The backing dictionary. The values are never used; just the keys.
Construct a concurrent set with the default concurrency level.
Construct a concurrent set using the specified equality comparer.
The equality comparer for values in the set.
Obtain the number of elements in the set.
The number of elements in the set.
Determine whether the set is empty.
true if the set is empty; otherwise, false.
Determine whether the given value is in the set.
The value to test.
true if the set contains the specified value; otherwise, false.
Attempts to add a value to the set.
The value to add.
true if the value was added to the set. If the value already exists, this method returns false.
Attempts to remove a value from the set.
The value to remove.
true if the value was removed successfully; otherwise false.
Clear the set
Obtain an enumerator that iterates through the elements in the set.
An enumerator for the set.
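The dictionary-backed design described above ("the values are never used; just the keys") can be sketched in Python. A single lock stands in for ConcurrentDictionary's finer-grained internal locking, so this is a semantic sketch rather than a faithful concurrency model:

```python
import threading

class SimpleConcurrentSet:
    """Sketch of a set backed by a dictionary whose values carry no
    information; only the keys matter."""

    def __init__(self):
        self._lock = threading.Lock()
        self._dict = {}

    def try_add(self, value) -> bool:
        """Add the value; return False if it was already present."""
        with self._lock:
            if value in self._dict:
                return False
            self._dict[value] = None  # value side is deliberately unused
            return True

    def try_remove(self, value) -> bool:
        """Remove the value; return False if it was not present."""
        with self._lock:
            if value in self._dict:
                del self._dict[value]
                return True
            return False

    def __contains__(self, value) -> bool:
        with self._lock:
            return value in self._dict

    def __len__(self) -> int:
        with self._lock:
            return len(self._dict)
```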
A simple Lisp-like immutable list. Good to use when lists are always accessed from the head.
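A cons list like the one described above can be sketched in a few lines (hypothetical names): prepending is O(1) and shares all existing nodes, which is why head-first access patterns suit it well.

```python
class ConsList:
    """Sketch of a Lisp-style immutable list node."""
    __slots__ = ("head", "tail")

    def __init__(self, head, tail=None):
        self.head = head
        self.tail = tail  # None marks the end of the list

    def push(self, value):
        # O(1) prepend: the new node shares every existing node.
        return ConsList(value, self)

    def __iter__(self):
        node = self
        while node is not None:
            yield node.head
            node = node.tail
```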
Names of well-known XML attributes and elements.
Implements a few file name utilities that are needed by the compiler.
In general the compiler is not supposed to understand the format of the paths.
In rare cases it needs to check if a string is a valid file name or change the extension
(embedded resources, netmodules, output name).
The APIs are intentionally limited to cover just these rare cases. Do not add more APIs.
Returns true if the string represents an unqualified file name.
The name may contain any characters but directory and volume separators.
Path.
True if is a simple file name, false if it is null or includes a directory specification.
Returns the offset in where the dot that starts an extension is, or -1 if the path doesn't have an extension.
Returns 0 for path ".goo".
Returns -1 for path "goo.".
Returns an extension of the specified path string.
The same functionality as but doesn't throw an exception
if there are invalid characters in the path.
Removes extension from path.
Returns "goo" for path "goo.".
Returns "goo.." for path "goo...".
Returns path with the extension changed to .
Equivalent of
If is null, returns null.
If path does not end with an extension, the new extension is appended to the path.
If extension is null, equivalent to .
Returns the position in given path where the file name starts.
-1 if path is null.
Get file name from path.
Unlike doesn't check for invalid path characters.
A set that returns the inserted values in insertion order.
The mutation operations are not thread-safe.
Null or empty.
"file"
".\file"
"..\file"
"\dir\file"
"C:dir\file"
"C:\file" or "\\machine" (UNC).
Represents a single item or many items (including none).
Used when a collection usually contains a single item but sometimes might contain multiple.
True if the collection has a single item. This item is stored in .
This class provides simple properties for determining whether the current platform is Windows or Unix-based.
We intentionally do not use System.Runtime.InteropServices.RuntimeInformation.IsOSPlatform(...) because
it incorrectly reports 'true' for 'Windows' in desktop builds running on Unix-based platforms via Mono.
Are we running on .NET 5 or later using the Mono runtime?
Will also return true when running on Mono itself; if necessary
we can use IsRunningOnMono to distinguish.
Attempts to read all of the requested bytes from the stream into the buffer
The number of bytes read. Less than will
only be returned if the end of stream is reached before all bytes can be read.
Unlike it is not guaranteed that
the stream position or the output buffer will be unchanged if an exception is
thrown.
Reads all bytes from the current position of the given stream to its end.
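The bounded read loop described above can be sketched in Python (hypothetical name). The point is that a single read call may legitimately return fewer bytes than requested, so the loop keeps reading until the requested count is reached or the stream ends:

```python
def try_read_all(stream, count: int) -> bytes:
    """Read up to `count` bytes; fewer are returned only if end of
    stream is reached first."""
    chunks = []
    remaining = count
    while remaining > 0:
        chunk = stream.read(remaining)
        if not chunk:          # end of stream reached early
            break
        chunks.append(chunk)
        remaining -= len(chunk)
    return b"".join(chunks)
```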
This is basically a lossy cache of strings that is searchable by
strings, string sub ranges, character array ranges or string-builder.
Merges the new change ranges into the old change ranges, adjusting the new ranges to be with respect to the original text
(with neither old nor new changes applied) instead of with respect to the original text after "old changes" are applied.
This may require splitting, concatenation, etc. of individual change ranges.
Both `oldChanges` and `newChanges` must contain non-overlapping spans in ascending order.
Represents a new change being processed by .
Such a new change must be adjusted before being added to the result list.
A value of this type may represent the intermediate state of merging of an old change into an unadjusted new change,
resulting in a temporary unadjusted new change whose is negative (not valid) until it is adjusted.
This tends to happen when we need to merge an old change deletion into a new change near the beginning of the text. (see TextChangeTests.Fuzz_4)
Resolves a relative path and returns an absolute path.
The method depends only on values of its parameters and their implementation (for fileExists).
It doesn't itself depend on the state of the current process (namely on the current drive directories) or
the state of file system.
Path to resolve.
Base file path to resolve CWD-relative paths against. Null if not available.
Base directory to resolve CWD-relative paths against if isn't specified.
Must be absolute path.
Null if not available.
Sequence of paths used to search for unqualified relative paths.
Method that tests existence of a file.
The resolved path or null if the path can't be resolved or does not exist.
Normalizes an absolute path.
Path to normalize.
Normalized path.
Used to create a file given a path specified by the user.
paramName - Provided by the public surface APIs to give a clearer message. Internal APIs just rethrow the exception.
Preferred mechanism to obtain both length and last write time of a file. Querying independently
requires multiple I/O hits, which are expensive, even if cached.
True if the character is the platform directory separator character or the alternate directory separator.
True if the character is any recognized directory separator character.
Removes trailing directory separator characters
This will trim the root directory separator:
"C:\" maps to "C:", and "/" maps to ""
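The trimming behavior described above, including trimming the root separator, can be sketched as (hypothetical name; both Windows and alternate separators are treated as separators):

```python
def trim_trailing_separators(path: str) -> str:
    """Drop trailing directory separators, including the root one,
    so 'C:\\' -> 'C:' and '/' -> ''."""
    end = len(path)
    while end > 0 and path[end - 1] in ("/", "\\"):
        end -= 1
    return path[:end]
```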
Ensures a trailing directory separator character
Get directory name from path.
Unlike it doesn't check for invalid path characters
Prefix of path that represents a directory
Gets the root part of the path.
Gets the specific kind of relative or absolute path.
True if the path is an absolute path (rooted to drive or network share)
Returns true if given path is absolute and starts with a drive specification ("C:\").
Combines an absolute path with a relative one.
Absolute root path.
Relative path.
An absolute combined path, or null if is
absolute (e.g. "C:\abc", "\\machine\share\abc"),
relative to the current root (e.g. "\abc"),
or relative to a drive directory (e.g. "C:abc\def").
Combine two paths, the first of which may be absolute.
First path: absolute, relative, or null.
Second path: relative and non-null.
null, if is null; a combined path, otherwise.
Combines paths with the same semantics as
but does not throw on null paths or paths with invalid characters.
First path: absolute, relative, or null.
Second path: absolute, relative, or null.
The combined paths. If contains an absolute path, returns .
Relative and absolute paths treated the same as .
Determines whether an assembly reference is considered an assembly file path or an assembly name.
Used, for example, on values of /r and #r.
Determines if "path" contains 'component' within itself.
i.e. asking if the path "c:\goo\bar\baz" has component "bar" would return 'true'.
On the other hand, if you had "c:\goo\bar1\baz" then it would not have "bar" as a
component.
A path contains a component if any file name or directory name in the path
matches 'component'. As such, if you had something like "\\goo" then that would
not have "goo" as a component. That's because here "goo" is the server name portion
of the UNC path, and not an actual directory or file name.
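The component check described above, including the UNC server-name exclusion, can be sketched as (hypothetical name; comparison is done case-insensitively here for illustration):

```python
def contains_path_component(path: str, component: str) -> bool:
    """True if any directory or file name in `path` equals `component`.
    The server-name portion of a UNC path ('\\\\server\\...') is not a
    directory or file name, so it is skipped."""
    normalized = path.replace("\\", "/")
    parts = [p for p in normalized.split("/") if p]
    if path.startswith(("\\\\", "//")):
        parts = parts[1:]  # drop the UNC server name
    return component.lower() in (p.lower() for p in parts)
```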
Gets a path relative to a directory.
True if the child path is a child of the parent path.
True if the two paths are the same.
True if the two paths are the same. (but only up to the specified length)
Unfortunately, we cannot depend on Path.GetInvalidPathChars() or Path.GetInvalidFileNameChars()
From MSDN: The array returned from this method is not guaranteed to contain the complete set of characters
that are invalid in file and directory names. The full set of invalid characters can vary by file system.
https://msdn.microsoft.com/en-us/library/system.io.path.getinvalidfilenamechars.aspx
Additionally, Path.GetInvalidPathChars() doesn't include "?" or "*" which are invalid characters,
and Path.GetInvalidFileNameChars() includes ":" and "\" which are valid characters.
The more accurate way is to let the framework parse the path and throw on any errors.
If the current environment uses the '\' directory separator, replaces all uses of '\'
in the given string with '/'. Otherwise, returns the string.
This method is equivalent to Microsoft.CodeAnalysis.BuildTasks.GenerateMSBuildEditorConfig.NormalizeWithForwardSlash
Both methods should be kept in sync.
Replaces all sequences of '\' or '/' with a single '/' but preserves UNC prefix '//'.
Takes an absolute path and attempts to expand any '..' or '.' into their equivalent representation.
An equivalent path that does not contain any '..' or '.' path parts, or the original path.
This method handles Unix and Windows drive-rooted absolute paths only (i.e. /a/b or x:\a\b). Passing any other kind of path
including relative, drive relative, unc, or windows device paths will simply return the original input.
Find a instance by first probing the contract name and then the name as it
would exist in mscorlib. This helps satisfy both the CoreCLR and Desktop scenarios.
Defines a set of helper methods to classify Unicode characters.
Returns true if the Unicode character can be a part of an identifier.
The Unicode character.
Check that the name is a valid Unicode identifier.
Returns true if the Unicode character is a formatting character (Unicode class Cf).
The Unicode character.
Returns true if the Unicode character is a formatting character (Unicode class Cf).
The Unicode character.
An that deserializes objects from a byte stream.
We start the version at something reasonably random. That way an older file, with
some random start-bytes, has little chance of matching our version. When incrementing
this version, just change VersionByte2.
Map of reference ids to deserialized strings.
Creates a new instance of a .
The stream to read objects from.
True to leave the open after the is disposed.
Attempts to create a from the provided .
If the does not start with a valid header, then will
be returned.
Creates an from the provided . Unlike , it requires the version of the data in the stream to
exactly match the current format version. Should only be used to read data written by the same version of
Roslyn.
Whether or not the validation bytes (see should be checked immediately at the stream's current
position.
A reference-id to object map, that can share base data efficiently.
An that serializes objects to a byte stream.
Byte marker mask for encoding a compressed uint.
Byte marker bits for a uint encoded in 1 byte.
Byte marker bits for a uint encoded in 2 bytes.
Byte marker bits for a uint encoded in 4 bytes.
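The actual marker bit values are internal to the writer and not shown here; as an illustrative sketch under that assumption, a scheme that reserves the top two bits of the first byte to select the width (leaving 6, 14, or 30 bits of payload) could look like:

```python
# Assumed, illustrative marker values -- not the writer's actual bits.
MARKER_1BYTE = 0b00  # value fits in the remaining 6 bits
MARKER_2BYTE = 0b01  # value fits in 14 bits
MARKER_4BYTE = 0b10  # value fits in 30 bits

def encode_compressed_uint(value: int) -> bytes:
    if value < (1 << 6):
        return bytes([(MARKER_1BYTE << 6) | value])
    if value < (1 << 14):
        return bytes([(MARKER_2BYTE << 6) | (value >> 8), value & 0xFF])
    if value < (1 << 30):
        return bytes([(MARKER_4BYTE << 6) | (value >> 24),
                      (value >> 16) & 0xFF, (value >> 8) & 0xFF, value & 0xFF])
    raise ValueError("value too large to be represented as a 30-bit unsigned integer")

def decode_compressed_uint(data: bytes) -> int:
    marker = data[0] >> 6
    if marker == MARKER_1BYTE:
        return data[0] & 0x3F
    if marker == MARKER_2BYTE:
        return ((data[0] & 0x3F) << 8) | data[1]
    return ((data[0] & 0x3F) << 24) | (data[1] << 16) | (data[2] << 8) | data[3]
```

Spending two bits on the marker is why only 30 payload bits remain, matching the "30-bit unsigned integer" limit in the error string above.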
Map of serialized string reference ids. The string-reference-map uses value-equality for greater cache hits
and reuse.
This is a mutable struct, and as such is not readonly.
When we write out strings we give each successive unique item a monotonically increasing integral ID
starting at 0. I.e. the first string gets ID-0, the next gets ID-1 and so on and so forth. We do *not*
include these IDs with the object when it is written out. We only include the ID if we hit the object
*again* while writing.
During reading, the reader knows to give each string it reads the same monotonically increasing integral
value. i.e. the first string it reads is put into an array at position 0, the next at position 1, and so
on. Then, when the reader reads in a string-reference it can just retrieve it directly from that array.
In other words, writing and reading take advantage of the fact that they know they will write and read
strings in the exact same order. So they only need the IDs for references and not the strings themselves
because the ID is inferred from the order the object is written or read in.
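The implicit-ID scheme described above can be sketched in Python (hypothetical names; tuples stand in for the serialized byte records). Writer and reader never exchange IDs for first occurrences, only for repeats, because both sides assign IDs in the same order:

```python
def write_strings(strings):
    """First occurrence of a string is written in full; repeats are
    written as the implicit ID assigned by first-occurrence order."""
    seen = {}   # string -> implicit id
    out = []
    for s in strings:
        if s in seen:
            out.append(("ref", seen[s]))
        else:
            seen[s] = len(seen)      # ids are assigned 0, 1, 2, ... in order
            out.append(("str", s))
    return out

def read_strings(records):
    """The reader rebuilds the same id table by reading in the same order."""
    table = []
    result = []
    for kind, payload in records:
        if kind == "str":
            table.append(payload)    # this string implicitly gets the next id
            result.append(payload)
        else:
            result.append(table[payload])
    return result
```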
Creates a new instance of a .
The stream to write to.
True to leave the open after the is disposed.
Whether or not the validation bytes (see )
should be immediately written into the stream.
Writes out a special sequence of bytes indicating that the stream is a serialized object stream. Used by the
to be able to easily detect if it is being improperly used, or if the stream is
corrupt.
Used so we can easily grab the low/high 64bits of a guid for serialization.
Only supports values of primitive scalar types. This really should only be used to emit VB preprocessor
symbol values (which are scalar, but untyped as 'object'). Callers which know their value's type should
call into that directly.
Write an array of bytes. The array data is provided as a ReadOnlySpan<>, and deserialized to a byte array.
The array data.
The null value
A string encoded as UTF-8 (using BinaryWriter.Write(string))
A string encoded as UTF16 (as array of UInt16 values)
A reference to a string with the id encoded as 1 byte.
A reference to a string with the id encoded as 2 bytes.
A reference to a string with the id encoded as 4 bytes.
The boolean value true.
The boolean value false.
A character value encoded as 2 bytes.
An Int8 value encoded as 1 byte.
An Int16 value encoded as 2 bytes.
An Int32 value encoded as 4 bytes.
An Int32 value encoded as 1 byte.
An Int32 value encoded as 2 bytes.
The Int32 value 0
The Int32 value 1
The Int32 value 2
The Int32 value 3
The Int32 value 4
The Int32 value 5
The Int32 value 6
The Int32 value 7
The Int32 value 8
The Int32 value 9
The Int32 value 10
An Int64 value encoded as 8 bytes
A UInt8 value encoded as 1 byte.
A UInt16 value encoded as 2 bytes.
A UInt32 value encoded as 4 bytes.
A UInt32 value encoded as 1 byte.
A UInt32 value encoded as 2 bytes.
The UInt32 value 0
The UInt32 value 1
The UInt32 value 2
The UInt32 value 3
The UInt32 value 4
The UInt32 value 5
The UInt32 value 6
The UInt32 value 7
The UInt32 value 8
The UInt32 value 9
The UInt32 value 10
A UInt64 value encoded as 8 bytes.
A float value encoded as 4 bytes.
A double value encoded as 8 bytes.
A decimal value encoded as 12 bytes.
A DateTime value
An array with length encoded as compressed uint
An array with zero elements
An array with one element
An array with 2 elements
An array with 3 elements
Encoding serialized as .
Encoding serialized as .
Encoding serialized as .
An object reference to reference-id map, that can share base data efficiently.
An AnnotationTable helps you attach your own annotation types/instances to syntax.
It maintains a map between your instances and the actual SyntaxAnnotations used to annotate the nodes
and offers an API that matches the true annotation API on SyntaxNode.
The table controls the lifetime of when you can find and retrieve your annotations. You won't be able to
find your annotations via HasAnnotations/GetAnnotations unless you use the same annotation table for these operations
that you used for the WithAdditionalAnnotations operation.
Your custom annotations are not serialized with the syntax tree, so they won't move across boundaries unless the
same AnnotationTable is available on both ends.
Also, note that this table is not thread-safe.
An AnnotationTable helps you attach your own annotation types/instances to syntax.
It maintains a map between your instances and the actual SyntaxAnnotations used to annotate the nodes
and offers an API that matches the true annotation API on SyntaxNode.
The table controls the lifetime of when you can find and retrieve your annotations. You won't be able to
find your annotations via HasAnnotations/GetAnnotations unless you use the same annotation table for these operations
that you used for the WithAdditionalAnnotations operation.
Your custom annotations are not serialized with the syntax tree, so they won't move across boundaries unless the
same AnnotationTable is available on both ends.
Also, note that this table is not thread-safe.
Represents a value that can be retrieved synchronously or asynchronously by many clients.
The value will be computed on-demand the moment the first client asks for it. While being
computed, more clients can request the value. As long as there are outstanding clients the
underlying computation will proceed. If all outstanding clients cancel their request then
the underlying value computation will be cancelled as well.
Creators of an can specify whether the result of the computation is
cached for future requests or not. Choosing to not cache means the computation functions are kept
alive, whereas caching means the value (but not functions) are kept alive once complete.
The underlying function that starts an asynchronous computation of the resulting value.
Null'ed out once we've computed the result and we've been asked to cache it. Otherwise,
it is kept around in case the value needs to be computed again.
The underlying function that starts a synchronous computation of the resulting value.
Null'ed out once we've computed the result and we've been asked to cache it, or if we
didn't get any synchronous function given to us in the first place.
The Task that holds the cached result.
Mutex used to protect reading and writing to all mutable objects and fields. Traces indicate that there's
negligible contention on this lock (and on any particular async-lazy in general), hence we can save some
memory by using ourselves as the lock, even though this may inhibit cancellation. Work done while holding
the lock should be kept to a minimum.
The hash set of all currently outstanding asynchronous requests. Null if there are no requests,
and will never be empty.
If an asynchronous request is active, the CancellationTokenSource that allows for
cancelling the underlying computation.
Whether a computation is active or queued on any thread, whether synchronous or
asynchronous.
Creates an AsyncLazy that always returns the value, analogous to .
Creates an AsyncLazy that supports both asynchronous computation and inline synchronous
computation.
A function called to start the asynchronous
computation. This function should be cheap and non-blocking.
A function to do the work synchronously, which
is allowed to block. This function should not be implemented by a simple Wait on the
asynchronous value. If that's all you are doing, just don't pass a synchronous function
in the first place.
Takes the lock for this object and if acquired validates the invariants of this class.
This inherits from to avoid allocating two objects when we can just use one.
The public surface area of should probably be avoided in favor of the public
methods on this class for correct behavior.
The associated with this request. This field will be initialized before
any cancellation is observed from the token.
NOTE: Only use if you truly need a BK-tree. If you just want to compare words, use the 'SpellChecker' type
instead.
An implementation of a Burkhard-Keller tree. Introduced in:
'Some approaches to best-match file searching.' Communications of the ACM (CACM), Volume 16, Issue 4, April 1973,
pages 230-236. http://dl.acm.org/citation.cfm?doid=362003.362025.
We have three completely flat arrays of structs. These arrays fully represent the BK tree. The structure
is as follows:
The root node is in _nodes[0].
It lists the count of edges it has. These edges are in _edges in the range [0*, childCount). Each edge has
the index of the child node it points to, and the edit distance between the parent and the child.
* of course '0' is only for the root case.
All nodes state where in _edges their child edges range starts, so the children for any node are in the
range [node.FirstEdgeIndex, node.FirstEdgeIndex + node.EdgeCount).
Each node also has an associated string. These strings are concatenated and stored in
_concatenatedLowerCaseWords. Each node has a TextSpan that indicates which portion of the character array
is their string. Note: I'd like to use an immutable array for the characters as well. However, we need to
create slices, and they need to work on top of an ArraySlice (which needs a char[]). The edit distance code
also wants to work on top of raw char[]s (both for speed, and so it can pool arrays to prevent lots of
garbage). Because of that we just keep this as a char[].
Where the child node can be found in .
The string this node corresponds to. Specifically, this span is the range of
for that string.
How many child edges this node has.
Where the first edge can be found in . The edges
are in the range _edges[FirstEdgeIndex, FirstEdgeIndex + EdgeCount)
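The lookup idea behind a BK-tree — prune any child edge whose distance falls outside [d - radius, d + radius], which is valid because the distance is a true metric — can be sketched with a simple pointer-based version; the flat-array layout above is a memory optimization of the same structure. This sketch uses a plain Levenshtein distance and hypothetical names:

```python
def levenshtein(a, b):
    """Plain dynamic-programming edit distance (insert/delete/substitute)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

class BKTree:
    def __init__(self):
        self._root = None  # (word, {edge_distance: child_node})

    def add(self, word):
        if self._root is None:
            self._root = (word, {})
            return
        node = self._root
        while True:
            d = levenshtein(node[0], word)
            if d == 0:
                return  # word already present
            child = node[1].get(d)
            if child is None:
                node[1][d] = (word, {})
                return
            node = child

    def search(self, word, radius):
        """All stored words within `radius` edits of `word`."""
        results = []
        stack = [self._root] if self._root else []
        while stack:
            stored, children = stack.pop()
            d = levenshtein(stored, word)
            if d <= radius:
                results.append(stored)
            # Triangle inequality: only edges in [d - radius, d + radius] can match.
            for edge, child in children.items():
                if d - radius <= edge <= d + radius:
                    stack.append(child)
        return results
```

Only the pruning bound depends on the metric property; everything else is an ordinary tree walk.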
NOTE: Only use if you truly need an edit distance. If you just want to compare words, use
the 'SpellChecker' type instead.
Implementation of the Damerau-Levenshtein edit distance algorithm from:
An Extension of the String-to-String Correction Problem:
Published in Journal of the ACM (JACM)
Volume 22 Issue 2, April 1975.
Important, unlike many edit distance algorithms out there, this one implements a true metric
that satisfies the triangle inequality. (Unlike the "Optimal String Alignment" or "Restricted
string edit distance" solutions which do not). This means this edit distance can be used in
other domains that require the triangle inequality (like BKTrees).
Specifically, this implementation satisfies the following inequality: D(x, y) + D(y, z) >= D(x, z)
(where D is the edit distance).
NOTE: Only use if you truly need an edit distance. If you just want to compare words, use
the 'SpellChecker' type instead.
Implementation of the Damerau-Levenshtein edit distance algorithm from:
An Extension of the String-to-String Correction Problem:
Published in Journal of the ACM (JACM)
Volume 22 Issue 2, April 1975.
Important, unlike many edit distance algorithms out there, this one implements a true metric
that satisfies the triangle inequality. (Unlike the "Optimal String Alignment" or "Restricted
string edit distance" solutions which do not). This means this edit distance can be used in
other domains that require the triangle inequality (like BKTrees).
Specifically, this implementation satisfies the following inequality: D(x, y) + D(y, z) >= D(x, z)
(where D is the edit distance).
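The distinction matters in practice: the restricted ("Optimal String Alignment") variant gives D("CA", "ABC") = 3, while the true metric gives 2 (transpose to "AC", then insert 'B'), and only the latter satisfies the triangle inequality a BK-tree relies on. A sketch of the unrestricted algorithm (Python for illustration; names are hypothetical):

```python
def damerau_levenshtein(a, b):
    """Unrestricted Damerau-Levenshtein distance (a true metric).

    Unlike the restricted/OSA variant, a substring may be edited again after a
    transposition, which is what preserves the triangle inequality.
    """
    da = {}                      # last row where each character of `a` occurred
    inf = len(a) + len(b)
    # d has two extra border rows/columns initialized to `inf`.
    d = [[inf] * (len(b) + 2) for _ in range(len(a) + 2)]
    for i in range(len(a) + 1):
        d[i + 1][1] = i
    for j in range(len(b) + 1):
        d[1][j + 1] = j
    for i in range(1, len(a) + 1):
        db = 0                   # last column in this row where a match occurred
        for j in range(1, len(b) + 1):
            k, l = da.get(b[j - 1], 0), db
            if a[i - 1] == b[j - 1]:
                cost, db = 0, j
            else:
                cost = 1
            d[i + 1][j + 1] = min(
                d[i][j] + cost,                            # substitution / match
                d[i + 1][j] + 1,                           # insertion
                d[i][j + 1] + 1,                           # deletion
                d[k][l] + (i - k - 1) + 1 + (j - l - 1))   # transposition
        da[a[i - 1]] = i
    return d[len(a) + 1][len(b) + 1]
```

The `da`/`db` bookkeeping tracks the most recent matching positions so a transposition can bridge across intervening edits, which the restricted variant cannot do.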
Private implementation we can delegate to for sets.
This must be a different name as overloads are not resolved based on constraints
and would conflict with
A covariant interface form of that lets you re-cast an
to a more base type. This can include types that do not implement if you want to prevent a caller from accidentally
disposing directly.
Gets the target object.
This call is not valid after is called. If this property or the target
object is used concurrently with a call to , it is possible for the code to be
using a disposed object. After the current instance is disposed, this property throws
. However, the exact time when this property starts throwing after
is called is unspecified; code is expected to not use this property or the object
it returns after any code invokes .
The target object.
Increments the reference count for the disposable object, and returns a new disposable reference to it.
The returned object is an independent reference to the same underlying object. Disposing of the
returned value multiple times will only cause the reference count to be decreased once.
A new pointing to the same underlying object, if it
has not yet been disposed; otherwise, if this reference to the underlying object
has already been disposed.
A lightweight mutual exclusion object which supports waiting with cancellation and prevents
recursion (i.e. you may not call Wait if you already hold the lock)
The provides a lightweight mutual exclusion class that doesn't
use Windows kernel synchronization primitives.
The implementation is distilled from the workings of
The basic idea is that we use a regular sync object (Monitor.Enter/Exit) to guard the setting
of an 'owning thread' field. If, during the Wait, we find the lock is held by someone else
then we register a cancellation callback and enter a "Monitor.Wait" loop. If the cancellation
callback fires, then it "pulses" all the waiters to wake them up and check for cancellation.
Waiters are also "pulsed" when leaving the lock.
All public members of are thread-safe and may be used concurrently
from multiple threads.
A synchronization object to protect access to the field and to be pulsed
when is called and during cancellation.
The of the thread that holds the lock. Zero if no thread is holding
the lock.
Constructor.
If false (the default), then the class
allocates an internal object to be used as a sync lock.
If true, then the sync lock object will be the NonReentrantLock instance itself. This
saves an allocation but a client may not safely further use this instance in a call to
Monitor.Enter/Exit or in a "lock" statement.
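The mechanism described above — a sync object guarding an owning-thread field, with waiters pulsed on release or cancellation — can be sketched with a condition variable (Python for illustration; a timed wait approximates the pulse-on-cancellation callback, and all names are hypothetical):

```python
import threading

class NonReentrantLock:
    def __init__(self):
        self._cond = threading.Condition()   # guards _owner; pulsed on release
        self._owner = None                   # ident of the owning thread, or None

    def wait(self, cancel_event=None):
        me = threading.get_ident()
        with self._cond:
            if self._owner == me:
                raise RuntimeError("recursive locking is not supported")
            while self._owner is not None:
                if cancel_event is not None and cancel_event.is_set():
                    raise InterruptedError("wait was canceled")
                # The real implementation pulses waiters from a cancellation
                # callback; polling with a timeout approximates that here.
                self._cond.wait(timeout=0.05)
            self._owner = me

    def release(self):
        with self._cond:
            if self._owner != threading.get_ident():
                raise RuntimeError("lock is not held by the calling thread")
            self._owner = None
            self._cond.notify_all()          # pulse the waiters

    def is_held_by_current_thread(self):
        with self._cond:
            return self._owner == threading.get_ident()
```

Recursion is rejected up front by comparing the owner field to the calling thread, matching the documented contract.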
Shared factory for use in lazy initialization.
Blocks the current thread until it can enter the , while observing a
.
Recursive locking is not supported; i.e., a thread may not call Wait successfully twice without an
intervening .
The token to
observe.
was
canceled.
The caller already holds the lock
Exit the mutual exclusion.
The calling thread must currently hold the lock.
The lock is not currently held by the calling thread.
Determine if the lock is currently held by the calling thread.
True if the lock is currently held by the calling thread.
Throw an exception if the lock is not held by the calling thread.
The lock is not currently held by the calling thread.
Checks if the lock is currently held.
Checks if the lock is currently held by the calling thread.
Take ownership of the lock (by the calling thread). The lock may not already
be held by any other code.
Release ownership of the lock. The lock must already be held by the calling thread.
Action object passed to a cancellation token registration.
Callback executed when a cancellation token is canceled during a Wait.
The syncLock that protects a instance.
Since we want to avoid boxing the return from , this type must be public.
Since we want to avoid boxing the return from , this type must be public.
A reference-counting wrapper which allows multiple uses of a single disposable object in code, which is
deterministically released (by calling ) when the last reference is
disposed.
Each instance of represents a counted reference (also
referred to as a reference in the following documentation) to a target object. Each of these
references has a lifetime, starting when it is constructed and continuing through its release. During
this time, the reference is considered alive. Each reference which is alive owns exactly one
reference to the target object, ensuring that it will not be disposed while still in use. A reference is
released through either of the following actions:
- The reference is explicitly released by a call to .
- The reference is no longer in use by managed code and gets reclaimed by the garbage collector.
While each instance of should be explicitly disposed when
the object is no longer needed by the code owning the reference, this implementation will not leak resources
in the event one or more callers fail to do so. When all references to an object are explicitly released
(i.e. by calling ), the target object will itself be deterministically released by a
call to when the last reference to it is released. However, in the event
one or more references is not explicitly released, the underlying object will still become eligible for
non-deterministic release (i.e. finalization) as soon as each reference to it is released by one of the
two actions described previously.
When using , certain steps must be taken to ensure the
target object is not disposed early.
Use consistently. In other words, do not mix code using
reference-counted wrappers with code that references to the target directly.
Only use the constructor one time per target object.
Additional references to the same target object must only be obtained by calling
.
Do not call on the target object directly. It will be called
automatically at the appropriate time, as described above.
All public methods on this type adhere to their pre- and post-conditions and will not invalidate state
even in concurrent execution.
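The counted-reference discipline described above can be sketched as follows (Python for illustration; the real type also supports weak references and finalization-based cleanup that this sketch omits, and all names are hypothetical):

```python
class RefCountedDisposable:
    """Each instance is one counted reference to a shared target.

    The shared count lives in a one-element list so every reference sees the
    same box. The target is disposed exactly once, when the last live
    reference is disposed.
    """

    def __init__(self, target, _box=None):
        if target is None:
            raise ValueError("target must not be None")
        self._target = target
        self._box = _box if _box is not None else [1]

    @property
    def target(self):
        if self._target is None:
            raise RuntimeError("reference already disposed")
        return self._target

    def try_add_reference(self):
        """Returns a new independent reference, or None if this one is disposed."""
        if self._target is None:
            return None
        self._box[0] += 1
        return RefCountedDisposable(self._target, self._box)

    def dispose(self):
        if self._target is None:
            return                     # disposing twice only decrements once
        target, self._target = self._target, None
        self._box[0] -= 1
        if self._box[0] == 0:
            target.dispose()           # deterministic release of the target
```

Clearing the target field on dispose is what makes double-dispose a no-op and makes further `try_add_reference` calls on that reference fail.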
The type of disposable object.
The target of this reference. This value is initialized to a non- value in the
constructor, and set to when the current reference is disposed.
This value is only cleared in order to support cases where one or more references is garbage
collected without having called.
The boxed reference count, which is shared by all references with the same object.
This field serves as the synchronization object for the current type, since it is shared among all
counted references to the same target object. Accesses to
should only occur when this object is locked.
PERF DEV NOTE: A concurrent (but complex) implementation of this type with identical semantics is
available in source control history. The use of exclusive locks was not causing any measurable
performance overhead even on 28-thread machines at the time this was written.
Initializes a new reference counting wrapper around an object.
The reference count is initialized to 1.
The object owned by this wrapper.
If is .
Gets the target object.
This call is not valid after is called. If this property or the target
object is used concurrently with a call to , it is possible for the code to be
using a disposed object. After the current instance is disposed, this property throws
. However, the exact time when this property starts throwing after
is called is unspecified; code is expected to not use this property or the object
it returns after any code invokes .
The target object.
Increments the reference count for the disposable object, and returns a new disposable reference to it.
The returned object is an independent reference to the same underlying object. Disposing of the
returned value multiple times will only cause the reference count to be decreased once.
A new pointing to the same underlying object, if it
has not yet been disposed; otherwise, if this reference to the underlying object
has already been disposed.
Provides the implementation for and
.
Releases the current reference, causing the underlying object to be disposed if this was the last
reference.
After this instance is disposed, the method can no longer be used to
obtain a new reference to the target, even if other references to the target object are still in
use.
Represents a weak reference to a which is capable of
obtaining a new counted reference up until the point when the object is no longer accessible.
This value type holds a single field, which is not subject to torn reads/writes.
Increments the reference count for the disposable object, and returns a new disposable reference to
it.
Unlike , this method is capable of
adding a reference to the underlying instance all the way up to the point where it is finally
disposed.
The returned object is an independent reference to the same underlying object. Disposing of
the returned value multiple times will only cause the reference count to be decreased once.
A new pointing to the same underlying object,
if it has not yet been disposed; otherwise, if the underlying object has
already been disposed.
Holds the reference count associated with a disposable object.
Holds the reference count associated with a disposable object.
Holds the weak reference used by instances of to obtain a reference-counted
reference to the original object. This field is initialized the first time a weak reference is obtained
for the instance, and latches in a non-null state once initialized.
DO NOT DISPOSE OF THE TARGET.
Implements a reference-counted cache, where key/value pairs are associated with a count. When the count of a pair goes to zero,
the value is evicted. Values can also be explicitly evicted at any time. In that case, any new calls to
will return a new value, and the existing holders of the evicted value will still dispose it once they're done with it.
Container for a factory.
Factory object that may be used for lazy initialization. Creates AsyncSemaphore instances with an initial count of 1.
TODO: remove this exception: https://github.com/dotnet/roslyn/issues/40476
This represents a soft crash request, as opposed to a hard crash, which brings down VS.
A soft crash is the same as a hard crash in every way, except that it should use NFW and an info bar
to inform users about the unexpected condition, instead of killing VS the way a traditional crash does.
In other words, no one should ever try to recover from this exception, but they must try not to hard crash.
This exception is based on the cancellation exception since, in Roslyn code, the cancellation exception is so far
the only exception that is safe to throw in 99% of cases without worrying about crashing VS. There is still the 1% of
cases where it will bring down VS, and those places should be guarded against this exception as we find them.
For now, this is opt-in: if a feature wants to move to soft crash (e.g., OOP), it should catch the
exception, translate it to this exception, add a handler that reports NFW and shows the info bar in its
code path, and make sure it doesn't bring down VS.
As we use soft crash in more places, we should come up with a more general framework.
This helper method provides semantics equivalent to the following, but avoids throwing an intermediate
in the case where the asynchronous operation is cancelled.
MethodAsync(TArg arg, CancellationToken cancellationToken)
{
var intermediate = await func(arg, cancellationToken).ConfigureAwait(false);
return transform(intermediate);
}
This helper method is only intended for use in cases where profiling reveals substantial overhead related to
cancellation processing.
The type of a state variable to pass to and .
The type of intermediate result produced by .
The type of result produced by .
The intermediate asynchronous operation.
The synchronous transformation to apply to the result of .
The state to pass to and .
The that the operation will observe.
Asserts the passed has already been completed.
This is useful for a specific case: sometimes you might be calling an API that is "sometimes" async, and you're
calling it from a synchronous method where you know it should have completed synchronously. This is an easy
way to assert that while silencing any compiler complaints.
Whether or not words should be considered similar if one is contained within the other
(regardless of edit distance). For example, if is true, then IService would be considered
similar to IServiceFactory despite the edit distance being quite high at 7.
Returns true if 'originalText' and 'candidateText' are likely a misspelling of each other.
Returns false otherwise. If it is a likely misspelling a similarityWeight is provided
to help rank the match. Lower costs mean it was a better match.
Stores the "path" from the root of a tree to a node, allowing the node to be recovered in a
later snapshot of the tree, under certain circumstances.
The implementation stores the child indices to represent the path, so any edit which affects
the child indices could render this object unable to recover its node. NOTE: One thing C#
IDE has done in the past to do a better job of this is to store the fully qualified name of
the member to at least be able to descend into the same member. We could apply the same sort
of logic here.
Attempts to recover the node at this path in the provided tree. If the node is found
then 'true' is returned, otherwise the result is 'false' and 'node' will be null.
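The child-index path idea can be sketched on any tree (Python for illustration; all names are hypothetical):

```python
class Node:
    def __init__(self, kind, children=()):
        self.kind = kind
        self.children = list(children)

def compute_path(root, target):
    """Child indices from `root` down to `target`, or None if not found."""
    if root is target:
        return []
    for index, child in enumerate(root.children):
        suffix = compute_path(child, target)
        if suffix is not None:
            return [index] + suffix
    return None

def try_resolve(root, path):
    """Walk `path` in a (possibly different) snapshot of the tree.

    Returns None if an edit changed the child indices along the path,
    mirroring the documented failure mode.
    """
    node = root
    for index in path:
        if index >= len(node.children):
            return None
        node = node.children[index]
    return node
```

Recovery succeeds in a structurally identical later snapshot and fails when an edit shifts the recorded indices, which is exactly the limitation noted above.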
Asserts the passed has already been completed.
This is useful for a specific case: sometimes you might be calling an API that is "sometimes" async, and you're
calling it from a synchronous method where you know it should have completed synchronously. This is an easy
way to assert that while silencing any compiler complaints.
Asserts the passed has already been completed.
This is useful for a specific case: sometimes you might be calling an API that is "sometimes" async, and you're
calling it from a synchronous method where you know it should have completed synchronously. This is an easy
way to assert that while silencing any compiler complaints.
Creates an event handler that holds onto the target weakly.
The target that is held weakly, and passed as an argument to the invoker.
An action that will receive the event arguments as well as the target instance.
The invoker itself must not capture any state.
Indicates that a code element is performance sensitive under a known scenario.
When applying this attribute, only explicitly set the values for properties specifically indicated by the
test/measurement technique described in the associated .
Gets the location where the original problem is documented, likely with steps to reproduce the issue and/or
validate performance related to a change in the method.
Gets or sets a description of the constraint imposed by the original performance issue.
Constraints are normally specified by other specific properties that allow automated validation of the
constraint. This property supports documenting constraints which cannot be described in terms of other
constraint properties.
Gets or sets a value indicating whether captures are allowed.
Gets or sets a value indicating whether implicit boxing of value types is allowed.
Gets or sets a value indicating whether enumeration of a generic
is allowed.
Gets or sets a value indicating whether locks are allowed.
Gets or sets a value indicating whether the asynchronous state machine typically completes synchronously.
When , validation of this performance constraint typically involves analyzing
the method to ensure synchronous completion of the state machine does not require the allocation of a
, either through caching the result or by using
.
Gets or sets a value indicating whether this is an entry point to a parallel algorithm.
Parallelization APIs and algorithms, e.g. Parallel.ForEach, may be efficient for parallel entry
points (few direct calls but large amounts of iterative work), but are problematic when called inside the
iterations themselves. Performance-sensitive code should avoid the use of heavy parallelization APIs except
for known entry points to the parallel portion of code.
Represents a non-cryptographic hash algorithm.
Gets the number of bytes produced from this hash algorithm.
The number of bytes produced from this hash algorithm.
Called from constructors in derived classes to initialize the
class.
The number of bytes produced from this hash algorithm.
is less than 1.
When overridden in a derived class,
appends the contents of to the data already
processed for the current hash computation.
The data to process.
When overridden in a derived class,
resets the hash computation to the initial state.
When overridden in a derived class,
writes the computed hash value to
without modifying accumulated state.
The buffer that receives the computed hash value.
Implementations of this method must write exactly
bytes to .
Do not assume that the buffer was zero-initialized.
The class validates the
size of the buffer before calling this method, and slices the span
down to be exactly in length.
Appends the contents of to the data already
processed for the current hash computation.
The data to process.
is .
Appends the contents of to the data already
processed for the current hash computation.
The data to process.
is .
Asynchronously reads the contents of
and appends them to the data already
processed for the current hash computation.
The data to process.
The token to monitor for cancellation requests.
The default value is .
A task that represents the asynchronous append operation.
is .
Gets the current computed hash value without modifying accumulated state.
The hash value for the data already provided.
Attempts to write the computed hash value to
without modifying accumulated state.
The buffer that receives the computed hash value.
On success, receives the number of bytes written to .
if is long enough to receive
the computed hash value; otherwise, .
Writes the computed hash value to
without modifying accumulated state.
The buffer that receives the computed hash value.
The number of bytes written to ,
which is always .
is shorter than .
Gets the current computed hash value and clears the accumulated state.
The hash value for the data already provided.
Attempts to write the computed hash value to .
If successful, clears the accumulated state.
The buffer that receives the computed hash value.
On success, receives the number of bytes written to .
and clears the accumulated state
if is long enough to receive
the computed hash value; otherwise, .
Writes the computed hash value to
then clears the accumulated state.
The buffer that receives the computed hash value.
The number of bytes written to ,
which is always .
is shorter than .
Writes the computed hash value to
then clears the accumulated state.
The buffer that receives the computed hash value.
Implementations of this method must write exactly
bytes to .
Do not assume that the buffer was zero-initialized.
The class validates the
size of the buffer before calling this method, and slices the span
down to be exactly in length.
The default implementation of this method calls
followed by .
Overrides of this method do not need to call either of those methods,
but must ensure that the caller cannot observe a difference in behavior.
This method is not supported and should not be called.
Call or
instead.
This method will always throw a .
In all cases.
Provides an implementation of the XXH128 hash algorithm for generating a 128-bit hash.
For methods that persist the computed numerical hash value as bytes,
the value is written in the Big Endian byte order.
XXH128 produces 16-byte hashes.
Initializes a new instance of the class using the default seed value 0.
Initializes a new instance of the class using the specified seed.
Computes the XXH128 hash of the provided data.
The data to hash.
The XXH128 128-bit hash code of the provided data.
is null.
Computes the XXH128 hash of the provided data using the provided seed.
The data to hash.
The seed value for this hash computation.
The XXH128 128-bit hash code of the provided data.
is null.
Computes the XXH128 hash of the provided data using the optionally provided .
The data to hash.
The seed value for this hash computation. The default is zero.
The XXH128 128-bit hash code of the provided data.
Computes the XXH128 hash of the provided data into the provided using the optionally provided .
The data to hash.
The buffer that receives the computed 128-bit hash code.
The seed value for this hash computation. The default is zero.
The number of bytes written to .
is shorter than (16 bytes).
Attempts to compute the XXH128 hash of the provided data into the provided using the optionally provided .
The data to hash.
The buffer that receives the computed 128-bit hash code.
When this method returns, contains the number of bytes written to .
The seed value for this hash computation. The default is zero.
if is long enough to receive the computed hash value (16 bytes); otherwise, .
Resets the hash computation to the initial state.
Appends the contents of to the data already processed for the current hash computation.
The data to process.
Writes the computed 128-bit hash value to without modifying accumulated state.
The buffer that receives the computed hash value.
The default secret for when no seed is provided.
This is the same as a custom secret derived from a seed of 0.
Calculates a 32-bit to 64-bit long multiply.
"This is a fast avalanche stage, suitable when input bits are already partially mixed."
Calculates a 64-bit to 128-bit multiply, then XOR folds it.
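The fold operation can be illustrated directly (Python, where integers are arbitrary precision; names are hypothetical):

```python
MASK64 = (1 << 64) - 1

def mul128_fold64(a, b):
    """Multiply two 64-bit values into a 128-bit product, then XOR-fold the
    high and low 64-bit halves into a single 64-bit result."""
    product = (a & MASK64) * (b & MASK64)
    return (product & MASK64) ^ (product >> 64)
```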
Optimized version of looping over .
The accumulators. Length is .
Used to store a custom secret generated from a seed. Length is .
The internal buffer. Length is .
The amount of memory in .
Number of stripes processed in the current block.
Total length hashed.
The seed employed (possibly 0).
Declare the following extension methods in System.Linq namespace to avoid accidental boxing of ImmutableArray{T} that implements IEnumerable{T}.
The boxing would occur if the methods were defined in Roslyn.Utilities and the file calling these methods has using Roslyn.Utilities
but not using System.Linq.
Represents a type that can be used to index a collection either from the start or the end.
Index is used by the C# compiler to support the new index syntax
int[] someArray = new int[5] { 1, 2, 3, 4, 5 };
int lastElement = someArray[^1]; // lastElement = 5
Construct an Index using a value and indicating if the index is from the start or from the end.
The index value. It has to be zero or a positive number.
Indicating if the index is from the start or from the end.
If the Index is constructed from the end, an index value of 1 means pointing at the last element and an index value of 0 means pointing beyond the last element.
Create an Index pointing at the first element.
Create an Index pointing beyond the last element.
Create an Index from the start at the position indicated by the value.
The index value from the start.
Create an Index from the end at the position indicated by the value.
The index value from the end.
Returns the index value.
Indicates whether the index is from the start or the end.
Calculate the offset from the start using the given collection length.
The length of the collection that the Index will be used with. The length has to be a positive value.
For performance reasons, we don't validate the input length parameter or the returned offset against negative values.
Nor do we validate that the returned offset is not greater than the input length.
It is expected that Index will be used with collections which always have a non-negative length/count. If the returned offset is negative
and is then used to index a collection, an out-of-range exception will be thrown, which has the same effect as the validation.
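The offset calculation itself is a one-liner, sketched here (Python for illustration; names are hypothetical):

```python
class Index:
    def __init__(self, value, from_end=False):
        if value < 0:
            raise ValueError("value must be non-negative")
        self.value = value
        self.from_end = from_end

    def get_offset(self, length):
        # From the end, value 1 is the last element and value 0 is past the end.
        return length - self.value if self.from_end else self.value
```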
Indicates whether the current Index object is equal to another object of the same type.
An object to compare with this object
Indicates whether the current Index object is equal to another Index object.
An object to compare with this object
Returns the hash code for this instance.
Converts integer number to an Index.
Converts the value of the current Index object to its equivalent string representation.
Represents a range that has start and end indexes.
Range is used by the C# compiler to support the range syntax.
int[] someArray = new int[5] { 1, 2, 3, 4, 5 };
int[] subArray1 = someArray[0..2]; // { 1, 2 }
int[] subArray2 = someArray[1..^0]; // { 2, 3, 4, 5 }
Represents the inclusive start index of the Range.
Represents the exclusive end index of the Range.
Construct a Range object using the start and end indexes.
Represents the inclusive start index of the range.
Represents the exclusive end index of the range.
Indicates whether the current Range object is equal to another object of the same type.
An object to compare with this object
Indicates whether the current Range object is equal to another Range object.
An object to compare with this object
Returns the hash code for this instance.
Converts the value of the current Range object to its equivalent string representation.
Create a Range object starting from start index to the end of the collection.
Create a Range object starting from first element in the collection to the end Index.
Create a Range object starting from first element to the end.
Calculate the start offset and length of range object using a collection length.
The length of the collection that the range will be used with. The length has to be a positive value.
For performance reasons, we don't validate the input length parameter against negative values.
It is expected that Range will be used with collections which always have a non-negative length/count.
We do validate that the range is inside the length scope, though.
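The calculation can be sketched as follows (Python for illustration, with the from-end arithmetic inlined; names are hypothetical):

```python
def get_offset_and_length(start, start_from_end, end, end_from_end, length):
    """Returns (offset, count) for a range over a collection of `length`.

    Mirrors the documented validation: the range must lie inside the
    collection, but `length` itself is trusted to be non-negative.
    """
    start_offset = length - start if start_from_end else start
    end_offset = length - end if end_from_end else end
    if not (0 <= start_offset <= end_offset <= length):
        raise IndexError("range is outside the bounds of the collection")
    return start_offset, end_offset - start_offset
```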
Represents a Unicode scalar value ([ U+0000..U+D7FF ], inclusive; or [ U+E000..U+10FFFF ], inclusive).
This type's constructors and conversion operators validate the input, so consumers can call the APIs
assuming that the underlying instance is well-formed.
Creates a from the provided UTF-16 code unit.
If represents a UTF-16 surrogate code point
U+D800..U+DFFF, inclusive.
Creates a from the provided UTF-16 surrogate pair.
If does not represent a UTF-16 high surrogate code point
or does not represent a UTF-16 low surrogate code point.
Creates a from the provided Unicode scalar value.
If does not represent a valid Unicode scalar value.
Creates a from the provided Unicode scalar value.
If does not represent a valid Unicode scalar value.
Returns true if and only if this scalar value is ASCII ([ U+0000..U+007F ])
and therefore representable by a single UTF-8 code unit.
Returns true if and only if this scalar value is within the BMP ([ U+0000..U+FFFF ])
and therefore representable by a single UTF-16 code unit.
Returns the Unicode plane (0 to 16, inclusive) which contains this scalar.
A instance that represents the Unicode replacement character U+FFFD.
Returns the length in code units () of the
UTF-16 sequence required to represent this scalar value.
The return value will be 1 or 2.
Returns the length in code units of the
UTF-8 sequence required to represent this scalar value.
The return value will be 1 through 4, inclusive.
Returns the Unicode scalar value as an integer.
Decodes the at the beginning of the provided UTF-16 source buffer.
If the source buffer begins with a valid UTF-16 encoded scalar value, returns ,
and outs via the decoded and via the
number of s used in the input buffer to encode the .
If the source buffer is empty or contains only a standalone UTF-16 high surrogate character, returns ,
and outs via and via the length of the input buffer.
If the source buffer begins with an ill-formed UTF-16 encoded scalar value, returns ,
and outs via and via the number of
s used in the input buffer to encode the ill-formed sequence.
The general calling convention is to call this method in a loop, slicing the buffer by
elements on each iteration of the loop. On each iteration of the loop
will contain the real scalar value if successfully decoded, or it will contain if
the data could not be successfully decoded. This pattern provides convenient automatic U+FFFD substitution of
invalid sequences while iterating through the loop.
Decodes the at the beginning of the provided UTF-8 source buffer.
If the source buffer begins with a valid UTF-8 encoded scalar value, returns ,
and outs via the decoded and via the
number of s used in the input buffer to encode the .
If the source buffer is empty or contains only a partial UTF-8 subsequence, returns ,
and outs via and via the length of the input buffer.
If the source buffer begins with an ill-formed UTF-8 encoded scalar value, returns ,
and outs via and via the number of
s used in the input buffer to encode the ill-formed sequence.
The general calling convention is to call this method in a loop, slicing the buffer by
elements on each iteration of the loop. On each iteration of the loop
will contain the real scalar value if successfully decoded, or it will contain if
the data could not be successfully decoded. This pattern provides convenient automatic U+FFFD substitution of
invalid sequences while iterating through the loop.
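The decode-in-a-loop convention can be sketched as follows. This is a simplified Python illustration of the idea (a hand-rolled decoder returning `(ok, scalar, consumed)`; its error-span handling is cruder than Unicode's maximal-subpart recommendation, and it is not the .NET implementation):

```python
def decode_first_scalar_utf8(buf: bytes):
    """Sketch of a DecodeFromUtf8-style routine: returns (ok, scalar, consumed).
    On failure, scalar is U+FFFD and consumed covers the ill-formed or
    incomplete subsequence (at least 1 byte for non-empty input)."""
    if not buf:
        return (False, 0xFFFD, 0)
    b0 = buf[0]
    if b0 < 0x80:
        return (True, b0, 1)
    if 0xC2 <= b0 <= 0xDF:
        need, scalar = 2, b0 & 0x1F
    elif 0xE0 <= b0 <= 0xEF:
        need, scalar = 3, b0 & 0x0F
    elif 0xF0 <= b0 <= 0xF4:
        need, scalar = 4, b0 & 0x07
    else:
        return (False, 0xFFFD, 1)  # stray continuation or invalid lead byte
    i = 1
    while i < need and i < len(buf) and 0x80 <= buf[i] <= 0xBF:
        scalar = (scalar << 6) | (buf[i] & 0x3F)
        i += 1
    if i < need:
        return (False, 0xFFFD, i)  # truncated or broken sequence
    # Reject overlong encodings, surrogates, and out-of-range values.
    min_scalar = (0x80, 0x800, 0x10000)[need - 2]
    if scalar < min_scalar or 0xD800 <= scalar <= 0xDFFF or scalar > 0x10FFFF:
        return (False, 0xFFFD, need)
    return (True, scalar, need)

def scalars_with_replacement(buf: bytes):
    """The loop convention described above: slice off `consumed` each pass,
    keeping U+FFFD wherever decoding failed."""
    out = []
    while buf:
        _, scalar, consumed = decode_first_scalar_utf8(buf)
        out.append(scalar)
        buf = buf[consumed:]
    return out
```

Because the failure path still reports how many code units to skip, the caller never has to special-case invalid data to make forward progress.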
Decodes the at the end of the provided UTF-16 source buffer.
This method is very similar to , but it allows
the caller to loop backward instead of forward. The typical calling convention is that on each iteration
of the loop, the caller should slice off the final elements of
the buffer.
Decodes the at the end of the provided UTF-8 source buffer.
This method is very similar to , but it allows
the caller to loop backward instead of forward. The typical calling convention is that on each iteration
of the loop, the caller should slice off the final elements of
the buffer.
Encodes this to a UTF-16 destination buffer.
The buffer to which to write this value as UTF-16.
The number of s written to .
If is not large enough to hold the output.
Encodes this to a UTF-8 destination buffer.
The buffer to which to write this value as UTF-8.
The number of s written to .
If is not large enough to hold the output.
Gets the which begins at index in
string .
Throws if is null, if is out of range, or
if does not reference the start of a valid scalar value within .
Returns if and only if is a valid Unicode scalar
value, i.e., is in [ U+0000..U+D7FF ], inclusive; or [ U+E000..U+10FFFF ], inclusive.
Returns if and only if is a valid Unicode scalar
value, i.e., is in [ U+0000..U+D7FF ], inclusive; or [ U+E000..U+10FFFF ], inclusive.
Returns a representation of this instance.
Attempts to create a from the provided input value.
Attempts to create a from the provided UTF-16 surrogate pair.
Returns if the input values don't represent a well-formed UTF-16 surrogate pair.
Attempts to create a from the provided input value.
Attempts to create a from the provided input value.
Encodes this to a UTF-16 destination buffer.
The buffer to which to write this value as UTF-16.
The number of s written to ,
or 0 if the destination buffer is not large enough to contain the output.
True if the value was written to the buffer; otherwise, false.
The property can be queried ahead of time to determine
the required size of the buffer.
Encodes this to a destination buffer as UTF-8 bytes.
The buffer to which to write this value as UTF-8.
The number of s written to ,
or 0 if the destination buffer is not large enough to contain the output.
True if the value was written to the buffer; otherwise, false.
The property can be queried ahead of time to determine
the required size of the buffer.
Attempts to get the which begins at index in
string .
if a scalar value was successfully extracted from the specified index,
if a value could not be extracted due to invalid data.
Throws only if is null or is out of range.
Creates a without performing validation on the input.
Formats a code point as the hex string "U+XXXX".
The input value doesn't have to be a real code point in the Unicode codespace. It can be any integer.
The Unicode replacement character U+FFFD.
Returns the Unicode plane (0 through 16, inclusive) which contains this code point.
Returns a Unicode scalar value from two code points representing a UTF-16 surrogate pair.
Given a Unicode scalar value, gets the number of UTF-16 code units required to represent this value.
Decomposes an astral Unicode scalar into UTF-16 high and low surrogate code units.
Given a Unicode scalar value, gets the number of UTF-8 code units required to represent this value.
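The surrogate-pair composition and decomposition above are simple arithmetic over the U+10000 offset. A sketch in Python (the formulas come from the UTF-16 encoding form; function names are illustrative):

```python
def scalar_from_surrogate_pair(high: int, low: int) -> int:
    """Compose an astral scalar: U+10000 + (high - 0xD800) * 0x400 + (low - 0xDC00)."""
    assert 0xD800 <= high <= 0xDBFF and 0xDC00 <= low <= 0xDFFF
    return 0x10000 + ((high - 0xD800) << 10) + (low - 0xDC00)

def surrogates_from_scalar(scalar: int):
    """Inverse: decompose an astral scalar (>= U+10000) into (high, low)."""
    assert 0x10000 <= scalar <= 0x10FFFF
    offset = scalar - 0x10000
    return (0xD800 + (offset >> 10), 0xDC00 + (offset & 0x3FF))
```

For instance, U+1F600 decomposes into the pair (0xD83D, 0xDE00), and composing that pair recovers U+1F600.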
Returns if and only if is an ASCII
character ([ U+0000..U+007F ]).
Per http://www.unicode.org/glossary/#ASCII, ASCII is only U+0000..U+007F.
Returns if and only if is in the
Basic Multilingual Plane (BMP).
Returns if and only if is a UTF-16 high surrogate code point,
i.e., is in [ U+D800..U+DBFF ], inclusive.
Returns if and only if is between
and , inclusive.
Returns if and only if is a UTF-16 low surrogate code point,
i.e., is in [ U+DC00..U+DFFF ], inclusive.
Returns if and only if is a UTF-16 surrogate code point,
i.e., is in [ U+D800..U+DFFF ], inclusive.
Returns if and only if is a valid Unicode code
point, i.e., is in [ U+0000..U+10FFFF ], inclusive.
Returns if and only if is a valid Unicode scalar
value, i.e., is in [ U+0000..U+D7FF ], inclusive; or [ U+E000..U+10FFFF ], inclusive.
Returns true iff the UInt32 represents two ASCII UTF-16 characters in machine endianness.
Returns true iff the UInt64 represents four ASCII UTF-16 characters in machine endianness.
Given a UInt32 that represents two ASCII UTF-16 characters, returns the invariant
lowercase representation of those characters. Requires the input value to contain
two ASCII UTF-16 characters in machine endianness.
This is a branchless implementation.
Given a UInt32 that represents two ASCII UTF-16 characters, returns the invariant
uppercase representation of those characters. Requires the input value to contain
two ASCII UTF-16 characters in machine endianness.
This is a branchless implementation.
Given a UInt32 that represents two ASCII UTF-16 characters, returns true iff
the input contains one or more lowercase ASCII characters.
This is a branchless implementation.
Given a UInt32 that represents two ASCII UTF-16 characters, returns true iff
the input contains one or more uppercase ASCII characters.
This is a branchless implementation.
Given two UInt32s that represent two ASCII UTF-16 characters each, returns true iff
the two inputs are equal using an ordinal case-insensitive comparison.
This is a branchless implementation.
Given two UInt64s that represent four ASCII UTF-16 characters each, returns true iff
the two inputs are equal using an ordinal case-insensitive comparison.
This is a branchless implementation.
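The branchless trick behind these helpers can be shown in Python by emulating 32-bit arithmetic with masks. Per 16-bit lane, bit 7 of `(c + 0x80 - 'A')` XOR `(c + 0x80 - '[')` is set exactly when `'A' <= c <= 'Z'`; shifting that bit right by 2 yields the 0x20 case bit. This is a sketch of the technique only, assuming the inputs really do hold two ASCII UTF-16 characters, as the summaries above require (`pack2` is an illustrative helper, not part of the API):

```python
MASK32 = 0xFFFFFFFF

def to_lowercase_packed(value: int) -> int:
    """Branchless lowercase of two ASCII UTF-16 chars packed in a uint32."""
    lower = (value + 0x00800080 - 0x00410041) & MASK32  # bit 7 set per lane iff c >= 'A'
    upper = (value + 0x00800080 - 0x005B005B) & MASK32  # bit 7 set per lane iff c >= '['
    mask = ((lower ^ upper) & 0x00800080) >> 2          # 0x20 per uppercase lane
    return value ^ mask                                 # toggle the case bit

def equals_ordinal_ignore_case_packed(a: int, b: int) -> bool:
    """Ordinal case-insensitive equality of two packed ASCII char pairs."""
    return to_lowercase_packed(a) == to_lowercase_packed(b)

def pack2(s: str) -> int:
    """Pack two ASCII chars into one 32-bit value (low char first)."""
    return ord(s[0]) | (ord(s[1]) << 16)
```

Because the ASCII precondition bounds each lane below 0x80, the additions and subtractions never carry or borrow across the 16-bit lane boundary, which is what makes the trick safe.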
Indicates that compiler support for a particular feature is required for the location where this attribute is applied.
The name of the compiler feature.
If true, the compiler can choose to allow access to the location where this attribute is applied if it does not understand .
The used for the ref structs C# feature.
The used for the required members C# feature.
Indicates which arguments to a method involving an interpolated string handler should be passed to that handler.
Initializes a new instance of the class.
The name of the argument that should be passed to the handler.
may be used as the name of the receiver in an instance method.
Initializes a new instance of the class.
The names of the arguments that should be passed to the handler.
may be used as the name of the receiver in an instance method.
Gets the names of the arguments that should be passed to the handler.
may be used as the name of the receiver in an instance method.
Indicates the attributed type is to be used as an interpolated string handler.
Initializes the .
Reserved to be used by the compiler for tracking metadata.
This class should not be used by developers in source code.
Specifies that a type has required members or that a member is required.
Indicates that an API is experimental and it may change in the future.
This attribute allows call sites to be flagged with a diagnostic that indicates that an experimental
feature is used. Authors can use this attribute to ship preview features in their assemblies.
Initializes a new instance of the class, specifying the ID that the compiler will use
when reporting a use of the API the attribute applies to.
The ID that the compiler will use when reporting a use of the API the attribute applies to.
Gets the ID that the compiler will use when reporting a use of the API the attribute applies to.
The unique diagnostic ID.
The diagnostic ID is shown in build output for warnings and errors.
This property represents the unique ID that can be used to suppress the warnings or errors, if needed.
Gets or sets the URL for corresponding documentation.
The API accepts a format string instead of an actual URL, creating a generic URL that includes the diagnostic ID.
The format string that represents a URL to corresponding documentation.
An example format string is https://contoso.com/obsoletion-warnings/{0}.
Specifies that null is allowed as an input even if the corresponding type disallows it.
Specifies that null is disallowed as an input even if the corresponding type allows it.
Specifies that an output may be null even if the corresponding type disallows it.
Specifies that an output will not be null even if the corresponding type allows it.
Specifies that when a method returns , the parameter may be null even if the corresponding type disallows it.
Initializes the attribute with the specified return value condition.
The return value condition. If the method returns this value, the associated parameter may be null.
Gets the return value condition.
Specifies that when a method returns , the parameter will not be null even if the corresponding type allows it.
Initializes the attribute with the specified return value condition.
The return value condition. If the method returns this value, the associated parameter will not be null.
Gets the return value condition.
Specifies that the output will be non-null if the named parameter is non-null.
Initializes the attribute with the associated parameter name.
The associated parameter name. The output will be non-null if the argument to the parameter specified is non-null.
Gets the associated parameter name.
Applied to a method that will never return under any circumstance.
Specifies that the method will not return if the associated Boolean parameter is passed the specified value.
Initializes the attribute with the specified parameter value.
The condition parameter value. Code after the method will be considered unreachable by diagnostics if the argument to
the associated parameter matches this value.
Gets the condition parameter value.
Specifies that the method or property will ensure that the listed field and property members have not-null values.
Initializes the attribute with a field or property member.
The field or property member that is promised to be not-null.
Initializes the attribute with the list of field and property members.
The list of field and property members that are promised to be not-null.
Gets field or property member names.
Specifies that the method or property will ensure that the listed field and property members have not-null values when returning with the specified return value condition.
Initializes the attribute with the specified return value condition and a field or property member.
The return value condition. If the method returns this value, the associated parameter will not be null.
The field or property member that is promised to be not-null.
Initializes the attribute with the specified return value condition and list of field and property members.
The return value condition. If the method returns this value, the associated parameter will not be null.
The list of field and property members that are promised to be not-null.
Gets the return value condition.
Gets field or property member names.
Specifies that this constructor sets all required members for the current type, and callers
do not need to set any required members themselves.
Provides a readonly abstraction of a set.
The type of elements in the set.
Determines whether the set contains a specific item.
The item to locate in the set.
if found; otherwise .
Determines whether the current set is a proper (strict) subset of a specified collection.
The collection to compare to the current set.
if the current set is a proper subset of other; otherwise .
other is .
Determines whether the current set is a proper (strict) superset of a specified collection.
The collection to compare to the current set.
if the collection is a proper superset of other; otherwise .
other is .
Determines whether the current set is a subset of a specified collection.
The collection to compare to the current set.
if the current set is a subset of other; otherwise .
other is .
Determines whether the current set is a superset of a specified collection.
The collection to compare to the current set.
if the current set is a superset of other; otherwise .
other is .
Determines whether the current set overlaps with the specified collection.
The collection to compare to the current set.
if the current set and other share at least one common element; otherwise, .
other is .
Determines whether the current set and the specified collection contain the same elements.
The collection to compare to the current set.
if the current set is equal to other; otherwise, .
other is .
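The set relations described above map directly onto Python's `frozenset` comparison operators, which makes for a quick illustration of the proper vs. non-proper distinction (this is a conceptual analogy, not the .NET API):

```python
a = frozenset({1, 2})
b = frozenset({1, 2, 3})
c = frozenset({3, 4})

is_proper_subset = a < b             # True: a is contained in b and a != b
is_subset_of_self = a <= a           # True: a subset need not be proper
is_proper_superset = b > a           # True: the mirror relation
overlaps = bool(b & c)               # True: they share the element 3
set_equals = a == frozenset({2, 1})  # True: element order is irrelevant
```

Note that a set is always a subset and superset of itself, but never a *proper* subset or superset of itself.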
A bare-bones array builder, focused on the case of producing s where the final array
size is known at construction time. In the golden path, where all the expected items are added to the builder, and
is called, this type is entirely garbage free. In the non-golden path (usually
encountered when a cancellation token interrupts getting the final array), this will leak the intermediary array
created to store the results.
This type should only be used when all of the following are true:
- The number of elements is known up front and is fixed. In other words, it isn't just an initial capacity or a rough heuristic; it will always be the exact number of elements added.
- Exactly that number of elements is actually added prior to calling . This means no patterns like "AddIfNotNull".
- The builder will be moved to an array (see ) or (see ).
If any of the above are not true (for example, the capacity is a rough hint, the exact number of elements may not
match the capacity specified, or it's intended as a scratch buffer and won't realize a final array), then should be used instead.
Moves the underlying buffer out of the control of this type, into the returned . It
is an error for a client of this type to specify a capacity and then attempt to call this without that number of elements actually having been added to the builder; this will
throw if attempted. The builder is effectively unusable once this is called.
The internal buffer resets to an empty array, meaning no more items can ever be added to it.