diff --git a/guides/execution/batching.md b/guides/execution/batching.md index 6ec415f591..784f7e5b6a 100644 --- a/guides/execution/batching.md +++ b/guides/execution/batching.md @@ -233,29 +233,25 @@ One schema can run _both_ legacy execution and batching execution. This enable a Performance improvements in batching execution come at the cost of removing support for many "nice-to-have" features in GraphQL-Ruby by default. Those features are addressed here. -### Query Analyzers, including complexity 🌕 +### Query Analyzers, including complexity ✅ Support is identical; this runs before execution using the exact same code. TODO: accessing loaded arguments inside analzyers may turn out to be slightly different; it still calls legacy code. -### Authorization, Scoping 🌕 +### Authorization, Scoping ✅ -Full compatibility, but the internal code which determines _when_ it should be called is still slow and clunky. - -- [x] Objects -- [x] Fields -- [x] Arguments -- [x] Resolvers -- [ ] TODO: improve detection/opt in for this feature +Full compatibility. `def (self.)authorized?` and `def self.scope_items` will be called as needed during execution. ### Visibility, including Changesets ✅ -Visibility works exactly as before; both runtime modules call the same method to get type information from the schema. +Visibility works exactly as before; both runtime modules call the same methods to get type information from the schema. + +### Dataloader ✅ -### Dataloader 🌕 +Dataloader runs with new execution, but batching behavior is different in some cases. -Dataloader _works_ but batching behavior is different in some cases. TODO document those cases, consider better future compatibility. +TODO document those cases, consider better future compatibility. ### Tracing ✅ @@ -273,13 +269,15 @@ Right now, lazy result from `resolve_type` is tied up in authorization compat sh ### `current_path` ❌ -TODO: not supported yet because the new runtime module doesn't actually product `current_path` while it's running.
I think it's possible to support it though. +This is not supported because the new runtime doesn't actually produce `current_path`. + +It is theoretically possible to support this, but it would be a ton of work. If you use this for core runtime functions, please share your use case in a GitHub issue and we can investigate future options. ### `@defer` and `@stream` ❌ This depends on `current_path` so isn't possible yet. -### Caching ❌ +### ObjectCache ❌ Actually this probably works but I haven't tested it. @@ -299,17 +297,22 @@ Possible but not implemented. Legacy support is implemented I believe. Partial support is possible, `obj` will not be given to `validates:` anymore maybe? -### Field Extensions ❌ +### Field Extensions ✅ + +Field extensions _are_ called, but via new methods: -Maybe this will be possible to support but with `objects` instead of `object` given to the hook. Change the hook name to `resolve_batch`? +- `def resolve_batching(objects:, arguments:, context:, &block)` receives `objects:` instead of `object:` and should yield them to the given block to continue execution +- `def after_resolve_batching(objects:, arguments:, context:, values:, memo:)` receives `objects:, values:, ...` instead of `object:, value:, ...` and should return an Array of results (instead of a single result value). + +Because of their close integration with the runtime, `ConnectionExtension` and `ScopeExtension` don't actually use `after_resolve_batching`. Instead, support is hard-coded inside the runtime. This might be a smell that field extensions aren't worth supporting. ### Resolver classes (including Mutations and Subscriptions) ❌ This should be supported somehow; legacy support is present now -### Field `extras:`, including `lookahead` ❌ +### Field `extras:`, including `lookahead` ✅ -TODO support here is possible but not implemented. Legacy support is implemented but should be extracted to an opt-in thing. +`:ast_node` and `:lookahead` are already implemented.
Others are possible -- please raise an issue if you need one. `extras: [:current_path]` is not possible. ### `raw_value` ❌ @@ -323,13 +326,9 @@ TODO: support is possible here but not tested - raising GraphQL::ExecutionError - Schema class error handling hooks -### Connection fields ❌ - -TODO -- make this better. - -Currently, argument definitions _are_ added to the field when a connection type is used as a return type. +### Connection fields ✅ -But arguments are not automatically hidden from the resolver and Connection wrappers are not automatically applied. Should they be? +Connection arguments are automatically handled and connection wrapper objects are automatically applied to arrays and relations. ### Custom Introspection ✅ diff --git a/lib/graphql/execution/batching/field_compatibility.rb b/lib/graphql/execution/batching/field_compatibility.rb index 8d474dc47a..ddded9e3a6 100644 --- a/lib/graphql/execution/batching/field_compatibility.rb +++ b/lib/graphql/execution/batching/field_compatibility.rb @@ -57,12 +57,10 @@ def resolve_batch(frs, objects, context, kwargs) if dynamic_introspection obj_inst = @owner.wrap(obj_inst, context) end - results << with_extensions(obj_inst, kwargs, context) do |obj, ruby_kwargs| - if ruby_kwargs.empty? - obj.public_send(@resolver_method) - else - obj.public_send(@resolver_method, **ruby_kwargs) - end + results << if kwargs.empty? 
+ obj_inst.public_send(@resolver_method) + else + obj_inst.public_send(@resolver_method, **kwargs) end end end @@ -75,23 +73,20 @@ def resolve_batch(frs, objects, context, kwargs) if maybe_err next maybe_err end - resolver_inst_kwargs = if @resolver_class < Schema::HasSingleInputArgument + ruby_kwargs = if @resolver_class < Schema::HasSingleInputArgument resolver_inst_kwargs[:input] else resolver_inst_kwargs end - with_extensions(o, resolver_inst_kwargs, context) do |obj, ruby_kwargs| - resolver_inst.object = obj - resolver_inst.prepared_arguments = ruby_kwargs - is_authed, new_return_value = resolver_inst.authorized?(**ruby_kwargs) - if frs.runner.resolves_lazies && frs.runner.schema.lazy?(is_authed) - is_authed, new_return_value = frs.runner.schema.sync_lazy(is_authed) - end - if is_authed - resolver_inst.call_resolve(ruby_kwargs) - else - new_return_value - end + resolver_inst.prepared_arguments = ruby_kwargs + is_authed, new_return_value = resolver_inst.authorized?(**ruby_kwargs) + if frs.runner.resolves_lazies && frs.runner.schema.lazy?(is_authed) + is_authed, new_return_value = frs.runner.schema.sync_lazy(is_authed) + end + if is_authed + resolver_inst.call_resolve(ruby_kwargs) + else + new_return_value end rescue RuntimeError => err err @@ -107,24 +102,7 @@ def resolve_batch(frs, objects, context, kwargs) elsif objects.first.is_a?(Interpreter::RawValue) objects else - # need to use connection extension if present, and extensions expect object type instances - if extensions.empty? - objects.map { |o| o.public_send(@method_sym)} - else - results = [] - frs.selections_step.graphql_objects.each_with_index do |obj_inst, idx| - if frs.object_is_authorized[idx] - results << with_extensions(obj_inst, EmptyObjects::EMPTY_HASH, context) do |obj, arguments| - if arguments.empty? 
- obj.object.public_send(@method_sym) - else - obj.object.public_send(@method_sym, **arguments) - end - end - end - end - results - end + objects.map { |o| o.public_send(@method_sym)} end end end diff --git a/lib/graphql/execution/batching/field_resolve_step.rb b/lib/graphql/execution/batching/field_resolve_step.rb index aba8ec7a51..d1ebe7adae 100644 --- a/lib/graphql/execution/batching/field_resolve_step.rb +++ b/lib/graphql/execution/batching/field_resolve_step.rb @@ -20,9 +20,12 @@ def initialize(parent_type:, runner:, key:, selections_step:) @static_type = nil @next_selections = nil @object_is_authorized = nil + @finish_extension_idx = nil + @was_scoped = nil end - attr_reader :ast_node, :key, :parent_type, :selections_step, :runner, :field_definition, :object_is_authorized, :arguments + attr_reader :ast_node, :key, :parent_type, :selections_step, :runner, + :field_definition, :object_is_authorized, :arguments, :was_scoped def path @path ||= [*@selections_step.path, @key].freeze @@ -147,11 +150,20 @@ def sync(lazy) def call if @enqueued_authorization && @pending_authorize_steps_count == 0 enqueue_next_steps + elsif @finish_extension_idx + finish_extensions elsif @field_results build_results else execute_field end + rescue StandardError => err + if @field_definition && !err.message.start_with?("Resolving ") + # TODO remove this check ^^^^^^ when NullDataloader isn't recursive + raise err, "Resolving #{@field_definition.path}: #{err.message}", err.backtrace + else + raise + end end def add_graphql_error(err) @@ -181,6 +193,7 @@ def execute_field end if @field_definition.dynamic_introspection + # TODO break this backwards compat somehow? objects = @selections_step.graphql_objects end @@ -202,7 +215,7 @@ def execute_field end @arguments[:ast_node] = ast_node else - raise ArgumentError, "This extra isn't supported yet: #{extra.inspect}. Open an issue on GraphQL-Ruby to add compatibility for it." 
+ raise ArgumentError, "This `extra` isn't supported yet: #{extra.inspect}. Open an issue on GraphQL-Ruby to add compatibility for it." end end @@ -222,12 +235,40 @@ def execute_field @object_is_authorized = AlwaysAuthorized end + if @parent_type.default_relay? && authorized_objects.all? { |o| o.respond_to?(:was_authorized_by_scope_items?) && o.was_authorized_by_scope_items? } + @was_scoped = true + end + query.current_trace.begin_execute_field(@field_definition, @arguments, authorized_objects, query) - @field_results = @field_definition.resolve_batch(self, authorized_objects, ctx, @arguments) + has_extensions = @field_definition.extensions.size > 0 + if has_extensions + @extended = GraphQL::Schema::Field::ExtendedState.new(@arguments, authorized_objects) + @field_results = @field_definition.run_batching_extensions_before_resolve(authorized_objects, @arguments, ctx, @extended) do |objs, args| + if (added_extras = @extended.added_extras) + args = args.dup + added_extras.each { |e| args.delete(e) } + end + @field_definition.resolve_batch(self, objs, ctx, args) + end + @finish_extension_idx = 0 + else + @field_results = @field_definition.resolve_batch(self, authorized_objects, ctx, @arguments) + end + query.current_trace.end_execute_field(@field_definition, @arguments, authorized_objects, query, @field_results) + if any_lazy_results? + @runner.dataloader.lazy_at_depth(path.size, self) + elsif has_extensions + finish_extensions + else + build_results + end + end + + def any_lazy_results? 
+ lazies = false if @runner.resolves_lazies # TODO extract this - lazies = false # TODO add a per-query cache of `.lazy?` @field_results.each do |field_result| if @runner.schema.lazy?(field_result) @@ -244,15 +285,49 @@ def execute_field end end end + end + lazies + end - if lazies - @runner.dataloader.lazy_at_depth(path.size, self) + def finish_extensions + ctx = @selections_step.query.context + memos = @extended.memos || EmptyObjects::EMPTY_HASH + while ext = @field_definition.extensions[@finish_extension_idx] + # These two are hardcoded here because of how they need to interact with runtime metadata. + # It would probably be better + case ext + when Schema::Field::ConnectionExtension + conns = ctx.schema.connections + @field_results = @field_results.map.each_with_index do |value, idx| + object = @extended.object[idx] + conn = conns.populate_connection(@field_definition, object, value, @arguments, ctx) + if conn + conn.was_authorized_by_scope_items = @was_scoped + end + conn + end + when Schema::Field::ScopeExtension + if @was_scoped.nil? + if (rt = @field_definition.type.unwrap).respond_to?(:scope_items) + @was_scoped = true + @field_results = @field_results.map { |v| v.nil? ? v : rt.scope_items(v, ctx) } + else + @was_scoped = false + end + end else - build_results + memo = memos[@finish_extension_idx] + @field_results = ext.after_resolve_batching(objects: @extended.object, arguments: @extended.arguments, context: ctx, values: @field_results, memo: memo) # rubocop:disable Development/ContextIsPassedCop + end + @finish_extension_idx += 1 + if any_lazy_results? 
+ @runner.dataloader.lazy_at_depth(path.size, self) + return end - else - build_results end + + @finish_extension_idx = nil + build_results end def build_results @@ -298,8 +373,8 @@ def build_results # Do nothing -- it will enqueue itself later end else - results = @selections_step.results ctx = @selections_step.query.context + results = @selections_step.results field_result_idx = 0 i = 0 s = results.size diff --git a/lib/graphql/execution/batching/prepare_object_step.rb b/lib/graphql/execution/batching/prepare_object_step.rb index 2a95c3a5fb..66aa26a62c 100644 --- a/lib/graphql/execution/batching/prepare_object_step.rb +++ b/lib/graphql/execution/batching/prepare_object_step.rb @@ -62,6 +62,12 @@ def call end def authorize + if @field_resolve_step.was_scoped && !@resolved_type.reauthorize_scoped_objects + @authorized_value = @object + create_result + return + end + query = @field_resolve_step.selections_step.query begin query.current_trace.begin_authorized(@resolved_type, @object, query.context) diff --git a/lib/graphql/pagination/connection.rb b/lib/graphql/pagination/connection.rb index e22c625619..32ca414109 100644 --- a/lib/graphql/pagination/connection.rb +++ b/lib/graphql/pagination/connection.rb @@ -94,6 +94,8 @@ def initialize(items, parent: nil, field: nil, context: nil, first: nil, after: @was_authorized_by_scope_items = detect_was_authorized_by_scope_items end + attr_writer :was_authorized_by_scope_items + def was_authorized_by_scope_items? @was_authorized_by_scope_items end diff --git a/lib/graphql/pagination/connections.rb b/lib/graphql/pagination/connections.rb index 00a144b000..55754f471d 100644 --- a/lib/graphql/pagination/connections.rb +++ b/lib/graphql/pagination/connections.rb @@ -83,6 +83,38 @@ def wrap(field, parent, items, arguments, context) end end + def populate_connection(field, object, value, original_arguments, context) + if value.is_a? 
GraphQL::ExecutionError + # This isn't even going to work because context doesn't have ast_node anymore + context.add_error(value) + nil + elsif value.nil? + nil + elsif value.is_a?(GraphQL::Pagination::Connection) + # update the connection with some things that may not have been provided + value.context ||= context + value.parent ||= object + value.first_value ||= original_arguments[:first] + value.after_value ||= original_arguments[:after] + value.last_value ||= original_arguments[:last] + value.before_value ||= original_arguments[:before] + value.arguments ||= original_arguments # rubocop:disable Development/ContextIsPassedCop -- unrelated .arguments method + value.field ||= field + if field.has_max_page_size? && !value.has_max_page_size_override? + value.max_page_size = field.max_page_size + end + if field.has_default_page_size? && !value.has_default_page_size_override? + value.default_page_size = field.default_page_size + end + if (custom_t = context.schema.connections.edge_class_for_field(field)) + value.edge_class = custom_t + end + value + else + context.namespace(:connections)[:all_wrappers] ||= context.schema.connections.all_wrappers + context.schema.connections.wrap(field, object, value, original_arguments, context) + end + end # use an override if there is one # @api private def edge_class_for_field(field) diff --git a/lib/graphql/schema/field.rb b/lib/graphql/schema/field.rb index ff175bd679..3d4f67ded0 100644 --- a/lib/graphql/schema/field.rb +++ b/lib/graphql/schema/field.rb @@ -955,6 +955,33 @@ def with_extensions(obj, args, ctx) end end + public + + def run_batching_extensions_before_resolve(objs, args, ctx, extended, idx: 0, &block) + extension = @extensions[idx] + if extension + extension.resolve_batching(objects: objs, arguments: args, context: ctx) do |extended_objs, extended_args, memo| + if memo + memos = extended.memos ||= {} + memos[idx] = memo + end + + if (extras = extension.added_extras) + ae = extended.added_extras ||= [] + 
ae.concat(extras) + end + + extended.object = extended_objs + extended.arguments = extended_args + run_batching_extensions_before_resolve(extended_objs, extended_args, ctx, extended, idx: idx + 1, &block) + end + else + yield(objs, args) + end + end + + private + def run_extensions_before_resolve(obj, args, ctx, extended, idx: 0) extension = @extensions[idx] if extension diff --git a/lib/graphql/schema/field/connection_extension.rb b/lib/graphql/schema/field/connection_extension.rb index 3046399757..4c2f5cd2e8 100644 --- a/lib/graphql/schema/field/connection_extension.rb +++ b/lib/graphql/schema/field/connection_extension.rb @@ -21,45 +21,25 @@ def resolve(object:, arguments:, context:) yield(object, next_args, arguments) end + def resolve_batching(objects:, arguments:, context:) + next_args = arguments.dup + next_args.delete(:first) + next_args.delete(:last) + next_args.delete(:before) + next_args.delete(:after) + yield(objects, next_args, arguments) + end + def after_resolve(value:, object:, arguments:, context:, memo:) original_arguments = memo - # rename some inputs to avoid conflicts inside the block - maybe_lazy = value - value = nil - context.query.after_lazy(maybe_lazy) do |resolved_value| - value = resolved_value - if value.is_a? GraphQL::ExecutionError - # This isn't even going to work because context doesn't have ast_node anymore - context.add_error(value) - nil - elsif value.nil? - nil - elsif value.is_a?(GraphQL::Pagination::Connection) - # update the connection with some things that may not have been provided - value.context ||= context - value.parent ||= object.object - value.first_value ||= original_arguments[:first] - value.after_value ||= original_arguments[:after] - value.last_value ||= original_arguments[:last] - value.before_value ||= original_arguments[:before] - value.arguments ||= original_arguments # rubocop:disable Development/ContextIsPassedCop -- unrelated .arguments method - value.field ||= field - if field.has_max_page_size? 
&& !value.has_max_page_size_override? - value.max_page_size = field.max_page_size - end - if field.has_default_page_size? && !value.has_default_page_size_override? - value.default_page_size = field.default_page_size - end - if (custom_t = context.schema.connections.edge_class_for_field(@field)) - value.edge_class = custom_t - end - value - else - context.namespace(:connections)[:all_wrappers] ||= context.schema.connections.all_wrappers - context.schema.connections.wrap(field, object.object, value, original_arguments, context) - end + context.query.after_lazy(value) do |resolved_value| + context.schema.connections.populate_connection(field, object.object, resolved_value, original_arguments, context) end end + + def after_resolve_batching(**kwargs) + raise "This should never be called -- it's hardcoded in execution instead." + end end end end diff --git a/lib/graphql/schema/field/scope_extension.rb b/lib/graphql/schema/field/scope_extension.rb index f031164f3d..ff5bcfd2c0 100644 --- a/lib/graphql/schema/field/scope_extension.rb +++ b/lib/graphql/schema/field/scope_extension.rb @@ -5,24 +5,33 @@ class Schema class Field class ScopeExtension < GraphQL::Schema::FieldExtension def after_resolve(object:, arguments:, context:, value:, memo:) - if value.nil? - value - else - ret_type = @field.type.unwrap - if ret_type.respond_to?(:scope_items) - scoped_items = ret_type.scope_items(value, context) - if !scoped_items.equal?(value) && !ret_type.reauthorize_scoped_objects - if (current_runtime_state = Fiber[:__graphql_runtime_info]) && - (query_runtime_state = current_runtime_state[context.query]) - query_runtime_state.was_authorized_by_scope_items = true + if object.is_a?(GraphQL::Schema::Object) + if value.nil? 
+ value + else + return_type = field.type.unwrap + if return_type.respond_to?(:scope_items) + scoped_items = return_type.scope_items(value, context) + if !scoped_items.equal?(value) && !return_type.reauthorize_scoped_objects + if (current_runtime_state = Fiber[:__graphql_runtime_info]) && + (query_runtime_state = current_runtime_state[context.query]) + query_runtime_state.was_authorized_by_scope_items = true + end end + scoped_items + else + value end - scoped_items - else - value end + end + else + # TODO skip this entirely? + value end end + + def after_resolve_batching(**kwargs) + raise "This should never be called -- it's hardcoded in execution instead." + end end end end diff --git a/lib/graphql/schema/field_extension.rb b/lib/graphql/schema/field_extension.rb index 0b264af625..b16c445130 100644 --- a/lib/graphql/schema/field_extension.rb +++ b/lib/graphql/schema/field_extension.rb @@ -134,6 +134,24 @@ def resolve(object:, arguments:, context:) yield(object, arguments, nil) end + # Called before batch-resolving {#field}. It should either: + # + # - `yield` values to continue execution; OR + # - return something else to shortcut field execution. + # + # Whatever this method returns will be used for execution. + # + # @param objects [Array] The objects the field is being resolved on + # @param arguments [Hash] Ruby keyword arguments for resolving this field + # @param context [Query::Context] the context for this query + # @yieldparam objects [Array] The objects to continue resolving the field on. Length must be the same as passed-in `objects:` + # @yieldparam arguments [Hash] The keyword arguments to continue resolving with + # @yieldparam memo [Object] Any extension-specific value which will be passed to {#after_resolve_batching} later + # @return [Array] The return value for this field, length matching passed-in `objects:`. 
+ def resolve_batching(objects:, arguments:, context:) + yield(objects, arguments, nil) + end + # Called after {#field} was resolved, and after any lazy values (like `Promise`s) were synced, # but before the value was added to the GraphQL response. # @@ -148,6 +166,21 @@ def resolve(object:, arguments:, context:) def after_resolve(object:, arguments:, context:, value:, memo:) value end + + # Called after {#field} was batch-resolved, and after any lazy values (like `Promise`s) were synced, + # but before the value was added to the GraphQL response. + # + # Whatever this hook returns will be used as the return value. + # + # @param objects [Array] The objects the field is being resolved on + # @param arguments [Hash] Ruby keyword arguments for resolving this field + # @param context [Query::Context] the context for this query + # @param values [Array] Whatever the field returned, one for each of `objects` + # @param memo [Object] The third value yielded by {#resolve_batching}, or `nil` if there wasn't one + # @return [Array] The return values for this field, length matching `objects:`. + def after_resolve_batching(objects:, arguments:, context:, values:, memo:) + values + end end end end diff --git a/lib/graphql/schema/member/base_dsl_methods.rb b/lib/graphql/schema/member/base_dsl_methods.rb index ea197ff8dd..53ff00aade 100644 --- a/lib/graphql/schema/member/base_dsl_methods.rb +++ b/lib/graphql/schema/member/base_dsl_methods.rb @@ -130,7 +130,7 @@ def authorized?(object, context) true end - def default_relay + def default_relay? false end diff --git a/spec/graphql/authorization_spec.rb b/spec/graphql/authorization_spec.rb index e484727770..d7507be082 100644 --- a/spec/graphql/authorization_spec.rb +++ b/spec/graphql/authorization_spec.rb @@ -537,7 +537,7 @@ def auth_execute(*args, **kwargs) # This switches on `context[:current_path]` which isn't implemented by Batching (yet?) 
expected_message = if TESTING_BATCHING - "`\"STREAM\"` was returned for `LandscapeFeature`, but this value was unauthorized. Update the field or resolver to return a different value in this case (or return `nil`)." + "Resolving Query.landscapeFeature: `\"STREAM\"` was returned for `LandscapeFeature`, but this value was unauthorized. Update the field or resolver to return a different value in this case (or return `nil`)." else "`Query.landscapeFeature` returned `\"STREAM\"` at `landscapeFeature`, but this value was unauthorized. Update the field or resolver to return a different value in this case (or return `nil`)." end diff --git a/spec/graphql/dataloader_spec.rb b/spec/graphql/dataloader_spec.rb index efaeb7b3b1..25b731dd23 100644 --- a/spec/graphql/dataloader_spec.rb +++ b/spec/graphql/dataloader_spec.rb @@ -1147,16 +1147,22 @@ def exec_query(query_string, schema: self.schema, context: nil, variables: nil) err = assert_raises GraphQL::Error do exec_query("{ testError }") end - - assert_equal "Field error", err.message + expected_message = "Field error" + if TESTING_BATCHING + expected_message = "Resolving Query.testError: #{expected_message}" + end + assert_equal expected_message, err.message end it "raises errors from sources" do err = assert_raises GraphQL::Error do exec_query("{ testError(source: true) }") end - - assert_equal "Source error on: [1]", err.message + expected_message = "Source error on: [1]" + if TESTING_BATCHING + expected_message = "Resolving Query.testError: #{expected_message}" + end + assert_equal expected_message, err.message end it "works with very very large queries" do diff --git a/spec/graphql/schema/field_extension_spec.rb b/spec/graphql/schema/field_extension_spec.rb index 4a31deaa08..d551ede604 100644 --- a/spec/graphql/schema/field_extension_spec.rb +++ b/spec/graphql/schema/field_extension_spec.rb @@ -7,18 +7,30 @@ class DoubleFilter < GraphQL::Schema::FieldExtension def after_resolve(object:, value:, arguments:, context:, memo:) 
value * 2 end + + def after_resolve_batching(objects:, values:, arguments:, context:, memo:) + values.map { |v| v * 2 } + end end class PowerOfFilter < GraphQL::Schema::FieldExtension def after_resolve(object:, value:, arguments:, context:, memo:) value**options.fetch(:power, 2) end + + def after_resolve_batching(objects:, values:, arguments:, context:, memo:) + values.map { |v| v**options.fetch(:power,2) } + end end class MultiplyByOption < GraphQL::Schema::FieldExtension def after_resolve(object:, value:, arguments:, context:, memo:) value * options[:factor] end + + def after_resolve_batching(objects:, values:, arguments:, context:, memo:) + values.map { |v| v * options[:factor] } + end end class MultiplyByArgument < GraphQL::Schema::FieldExtension @@ -31,9 +43,18 @@ def resolve(object:, arguments:, context:) yield(object, arguments, factor) end + def resolve_batching(objects:, arguments:, context:) + factor = arguments[:factor] + yield(objects, arguments, factor) + end + def after_resolve(object:, value:, arguments:, context:, memo:) value * memo end + + def after_resolve_batching(objects:, values:, arguments:, context:, memo:) + values.map { |v| v * memo } + end end class MultiplyByArgumentUsingResolve < GraphQL::Schema::FieldExtension @@ -45,7 +66,14 @@ def apply # This method's return value is passed along def resolve(object:, arguments:, context:) factor = arguments[:factor] - yield(object, arguments) * factor + result = yield(object, arguments) + result * factor + end + + def resolve_batching(objects:, arguments:, context:) + results = yield(objects, arguments) + factor = arguments[:factor] + results.map { |r| r * factor } end end @@ -60,8 +88,20 @@ def resolve(object:, arguments:, context:) yield(object, new_arguments, { original_arguments: arguments}) end + def resolve_batching(objects:, arguments:, context:) + new_arguments = arguments.dup + new_arguments.delete(:factor) + yield(objects, new_arguments, { original_arguments: arguments}) + end + def 
after_resolve(object:, value:, arguments:, context:, memo:) - value * memo[:original_arguments][:factor] + f = memo[:original_arguments][:factor] + value * f + end + + def after_resolve_batching(objects:, values:, arguments:, context:, memo:) + f = memo[:original_arguments][:factor] + values.map { |v| v * f } end end @@ -72,16 +112,31 @@ def resolve(object:, arguments:, **_rest) yield(object, new_args) end + def resolve_batching(objects:, arguments:, **_rest) + new_args = arguments.dup + new_args[:extended] = true + yield(objects, new_args) + end + def after_resolve(arguments:, context:, value:, **_rest) context[:extended_args] = arguments[:extended] value end + + def after_resolve_batching(objects:, values:, arguments:, context:, memo:) + context[:extended_args] = arguments[:extended] + values + end end class ShortcutsResolve < GraphQL::Schema::FieldExtension def resolve(**_args) options[:shortcut_value] end + + def resolve_batching(objects:, **args) + Array.new(objects.size, options[:shortcut_value]) + end end class ObjectClassExtension < GraphQL::Schema::FieldExtension @@ -89,9 +144,17 @@ def resolve(object:, **_args) object.class.name end + def resolve_batching(objects:, **_args) + objects.map { |o| o.class.name } + end + def after_resolve(value:, object:, **_args) [object.class.name, value] end + + def after_resolve_batching(objects:, values:, **_args) + objects.map.with_index { |o, i| [o.class.name, values[i]] } + end end class AddNestedExtensionExtension < GraphQL::Schema::FieldExtension @@ -103,10 +166,18 @@ class NestedExtension < GraphQL::Schema::FieldExtension def resolve(**_args) 1 end + + def resolve_batching(**_args) + 1 + end end end class BaseObject < GraphQL::Schema::Object + class BaseField < GraphQL::Schema::Field + include GraphQL::Execution::Batching::FieldCompatibility if TESTING_BATCHING + end + field_class(BaseField) end class Query < BaseObject @@ -172,17 +243,26 @@ def pass_thru_without_splat(input:) class Schema < GraphQL::Schema 
     query(Query)
+    use GraphQL::Execution::Batching if TESTING_BATCHING
   end
 end

 def exec_query(query_str, **kwargs)
-  FilterTestSchema::Schema.execute(query_str, **kwargs)
+  if TESTING_BATCHING
+    FilterTestSchema::Schema.execute_batching(query_str, **kwargs)
+  else
+    FilterTestSchema::Schema.execute(query_str, **kwargs)
+  end
 end

 describe "object" do
   it "is the schema type object" do
-    res = exec_query("{ objectClassTest }")
-    assert_equal ["FilterTestSchema::Query", "FilterTestSchema::Query"], res["data"]["objectClassTest"]
+    res = exec_query("{ objectClassTest }", root_value: Object.new)
+    if TESTING_BATCHING
+      assert_equal ["Object", "Object"], res["data"]["objectClassTest"]
+    else
+      assert_equal ["FilterTestSchema::Query", "FilterTestSchema::Query"], res["data"]["objectClassTest"]
+    end
   end
 end
diff --git a/spec/graphql/schema/member/scoped_spec.rb b/spec/graphql/schema/member/scoped_spec.rb
index 8b5fc76074..4880263d90 100644
--- a/spec/graphql/schema/member/scoped_spec.rb
+++ b/spec/graphql/schema/member/scoped_spec.rb
@@ -4,6 +4,10 @@
 describe GraphQL::Schema::Member::Scoped do
   class ScopeSchema < GraphQL::Schema
     class BaseObject < GraphQL::Schema::Object
+      class BaseField < GraphQL::Schema::Field
+        include GraphQL::Execution::Batching::FieldCompatibility if TESTING_BATCHING
+      end
+      field_class BaseField
     end

     class Item < BaseObject
@@ -31,7 +35,7 @@ def self.authorized?(obj, ctx)
         if ctx[:allow_unscoped]
           true
         else
-          raise "This should never be called (#{ctx[:current_path]}, #{ctx[:current_field].path})"
+          raise "This should never be called (#{ctx[:current_path]}, #{ctx[:current_field]&.path})"
         end
       end

@@ -121,6 +125,15 @@ def lazy_items
     query(Query)
     lazy_resolve(Proc, :call)
+    use(GraphQL::Execution::Batching) if TESTING_BATCHING
+  end
+
+  def exec_query(...)
+    if TESTING_BATCHING
+      ScopeSchema.execute_batching(...)
+    else
+      ScopeSchema.execute(...)
+    end
   end

   describe ".scope_items(items, ctx)" do
@@ -132,7 +145,7 @@ def get_item_names_with_context(ctx, field_name: "items")
         }
       }
     "
-    res = ScopeSchema.execute(query_str, context: ctx)
+    res = exec_query(query_str, context: ctx)
     res["data"][field_name].map { |i| i["name"] }
   end

@@ -154,7 +167,7 @@ def get_item_names_with_context(ctx, field_name: "items")
         }
       }
     "
-    res = ScopeSchema.execute(query_str)
+    res = exec_query(query_str)
     refute res.key?("errors")
     assert_nil res.fetch("data").fetch("nilItems")
   end
@@ -175,7 +188,7 @@ def get_item_names_with_context(ctx, field_name: "items")
         }
       }
     "
-    res = ScopeSchema.execute(query_str, context: {english: true})
+    res = exec_query(query_str, context: {english: true})
     names = res["data"]["itemsConnection"]["edges"].map { |e| e["node"]["name"] }
     assert_equal ["Paperclip"], names
     assert_equal 1, res.context[:scope_calls]
@@ -189,7 +202,7 @@ def get_item_names_with_context(ctx, field_name: "items")
         }
       }
     "
-    res = ScopeSchema.execute(query_str, context: {english: true})
+    res = exec_query(query_str, context: {english: true})
     names = res["data"]["itemsConnection"]["nodes"].map { |e| e["name"] }
     assert_equal ["Paperclip"], names
     assert_equal 1, res.context[:scope_calls]
@@ -208,7 +221,7 @@ def get_item_names_with_context(ctx, field_name: "items")
         }
       }
     "
-    res = ScopeSchema.execute(query_str, context: ctx)
+    res = exec_query(query_str, context: ctx)
     names = res["data"]["itemsConnection"]["edges"].map { |e| e["node"]["name"] }
     assert_equal ["Trombone", "Paperclip"], names
     assert_equal true, ctx[:proc_called]
@@ -229,7 +242,7 @@ def get_item_names_with_context(ctx, field_name: "items")
         }
       }
     "
-    res = ScopeSchema.execute(query_str, context: { french: true })
+    res = exec_query(query_str, context: { french: true })
     names = res["data"]["lazyItemsConnection"]["edges"].map { |e| e["node"]["name"] }
     assert_equal ["Trombone"], names
     names2 = res["data"]["lazyItems"].map { |e| e["name"] }
@@ -238,7 +251,7 @@ def get_item_names_with_context(ctx, field_name: "items")
   it "doesn't shortcut authorization when `reauthorize_scoped_objects(true)`" do
     query_str = "{ reauthorizeItems { name } }"
-    res = ScopeSchema.execute(query_str, context: { french: true })
+    res = exec_query(query_str, context: { french: true })
     assert_equal 1, res["data"]["reauthorizeItems"].length
     assert_equal 1, res.context[:scope_calls]
     assert res.context[:was_authorized]
@@ -257,7 +270,7 @@ def get_item_names_with_context(ctx, field_name: "items")
         }
       }
     "
-    res = ScopeSchema.execute(query_str, context: {first_letter: "T"})
+    res = exec_query(query_str, context: {first_letter: "T"})
     things = res["data"]["things"]
     assert_equal [{ "name" => "Trombone" }, {"designation" => "Turbine"}], things
   end
@@ -301,6 +314,11 @@ def get_item_names_with_context(ctx, field_name: "items")
   describe "skipping authorization on scoped lists" do
     class SkipAuthSchema < GraphQL::Schema
       class Book < GraphQL::Schema::Object
+        class BaseField < GraphQL::Schema::Field
+          include(GraphQL::Execution::Batching::FieldCompatibility) if TESTING_BATCHING
+        end
+        field_class(BaseField)
+
         def self.authorized?(obj, ctx)
           ctx[:auth_log] << [:authorized?, obj[:title]]
           true
@@ -323,31 +341,48 @@ class ReauthorizedBook < Book
       end

       class Query < GraphQL::Schema::Object
-        field :book, Book
+        field :book, Book, resolve_static: true

         def book
+          self.class.book(context)
+        end
+
+        def self.book(_context)
           { title: "Nonsense Omnibus"}
         end

-        field :books, [Book]
+        field :books, [Book], resolve_static: true

         def books
+          self.class.books(context)
+        end
+
+        def self.books(_context)
           [{ title: "Jayber Crow" }, { title: "Hannah Coulter" }]
         end

-        field :skip_authorization_books, [SkipAuthorizationBook], resolver_method: :books
+        field :skip_authorization_books, [SkipAuthorizationBook], resolver_method: :books, resolve_static: :books

-        field :reauthorized_books, [ReauthorizedBook], resolver_method: :books
+        field :reauthorized_books, [ReauthorizedBook], resolver_method: :books, resolve_static: :books

-        field :skip_authorization_books_connection, SkipAuthorizationBook.connection_type, resolver_method: :books
+        field :skip_authorization_books_connection, SkipAuthorizationBook.connection_type, resolver_method: :books, resolve_static: :books
       end

       query(Query)
+      use(GraphQL::Execution::Batching) if TESTING_BATCHING
+    end
+
+    def exec_skip_auth(...)
+      if TESTING_BATCHING
+        SkipAuthSchema.execute_batching(...)
+      else
+        SkipAuthSchema.execute(...)
+      end
+    end
     end

     it "runs both authorizations by default" do
       log = []
-      SkipAuthSchema.execute("{ book { title } books { title } }", context: { auth_log: log })
+      exec_skip_auth("{ book { title } books { title } }", context: { auth_log: log })
       expected_log = [
         [:authorized?, "Nonsense Omnibus"],
         [:scope_items, ["Jayber Crow", "Hannah Coulter"]],
@@ -359,13 +394,13 @@ def books

     it "skips self.authorized? when configured" do
       log = []
-      SkipAuthSchema.execute("{ skipAuthorizationBooks { title } }", context: { auth_log: log })
+      exec_skip_auth("{ skipAuthorizationBooks { title } }", context: { auth_log: log })
       assert_equal [[:scope_items, ["Jayber Crow", "Hannah Coulter"]]], log
     end

     it "can be re-enabled in subclasses" do
       log = []
-      SkipAuthSchema.execute("{ reauthorizedBooks { title } }", context: { auth_log: log })
+      exec_skip_auth("{ reauthorizedBooks { title } }", context: { auth_log: log })
       expected_log = [
         [:scope_items, ["Jayber Crow", "Hannah Coulter"]],
         [:authorized?, "Jayber Crow"],
@@ -377,7 +412,7 @@ def books

     it "skips auth in connections" do
       log = []
-      SkipAuthSchema.execute("{ skipAuthorizationBooksConnection(first: 10) { nodes { title } } }", context: { auth_log: log })
+      exec_skip_auth("{ skipAuthorizationBooksConnection(first: 10) { nodes { title } } }", context: { auth_log: log })
       assert_equal [[:scope_items, ["Jayber Crow", "Hannah Coulter"]]], log
     end
   end
diff --git a/spec/graphql/schema/resolver_spec.rb b/spec/graphql/schema/resolver_spec.rb
index aabb1d861f..c4b73b9d45 100644
--- a/spec/graphql/schema/resolver_spec.rb
+++ b/spec/graphql/schema/resolver_spec.rb
@@ -94,6 +94,11 @@ def resolve(object:, arguments:, **rest)
         name = yield(object, arguments)
         "#{options[:greeting]}, #{name}!"
       end
+
+      def resolve_batching(objects:, arguments:, **rest)
+        names = yield(objects, arguments)
+        names.map { |n| "#{options[:greeting]}, #{n}!" }
+      end
     end

     class ResolverWithExtension < BaseResolver
diff --git a/spec/support/connection_assertions.rb b/spec/support/connection_assertions.rb
index 7b3c7240a0..4a3956b52d 100644
--- a/spec/support/connection_assertions.rb
+++ b/spec/support/connection_assertions.rb
@@ -53,29 +53,16 @@ class << self
     self.connection_class = connection_class
     self.total_count_connection_class = total_count_connection_class

-    base_field = Class.new(GraphQL::Schema::Field) do
-      include GraphQL::Execution::Batching::FieldCompatibility if TESTING_BATCHING
-    end
-
-    base_object = Class.new(GraphQL::Schema::Object) do
-      field_class(base_field)
-    end
-
+    base_object = GraphQL::Schema::Object
     item = Class.new(base_object) do
       graphql_name "Item"
-      field :name, String, null: false
+      field :name, String, null: false, hash_key: :name
     end

     custom_item_edge = Class.new(GraphQL::Types::Relay::BaseEdge) do
       node_type item
       graphql_name "CustomItemEdge"
-      field :parent_class, String, null: false
-
-      def parent_class
-        object.parent.class.inspect
-      end
-
       field :node_class_name, String, null: false
     end
@@ -83,6 +70,10 @@ def parent_class
       def node_class_name
         node.class.name
       end
+
+      def parent_class
+        parent.class.inspect
+      end
     end

     custom_item_connection = Class.new(GraphQL::Types::Relay::BaseConnection) do
@@ -104,67 +95,97 @@ def node_class_name
     query = Class.new(base_object) do
       graphql_name "Query"
-      field :items, item.connection_type, null: false do
+      field :items, item.connection_type, null: false, resolve_static: true do
         argument :max_page_size_override, Integer, required: false
         argument :default_page_size_override, Integer, required: false
       end

-      def items(max_page_size_override: :no_value, default_page_size_override: :no_value)
+      def items(**kwargs)
+        self.class.items(context, **kwargs)
+      end
+
+      def self.items(context, max_page_size_override: :no_value, default_page_size_override: :no_value)
         if max_page_size_override == :no_value && default_page_size_override == :no_value
           # don't manually apply the wrapper when it's not required -- check automatic wrapping.
-          get_items
+          get_items(context)
         else
           args = {}
           args[:max_page_size] = max_page_size_override if max_page_size_override != :no_value
           args[:default_page_size] = default_page_size_override if default_page_size_override != :no_value
-          context.schema.connection_class.new(get_items, **args)
+          context.schema.connection_class.new(get_items(context), **args)
         end
       end

-      field :custom_items, custom_item_connection, null: false
+      field :custom_items, custom_item_connection, null: false, resolve_static: true

       def custom_items
-        context.schema.total_count_connection_class.new(get_items)
+        self.class.custom_items(context)
+      end
+
+      def self.custom_items(context)
+        context.schema.total_count_connection_class.new(get_items(context))
       end

       if connection_class
-        field :custom_items_with_custom_edge, custom_items_with_custom_edge, null: false
+        field :custom_items_with_custom_edge, custom_items_with_custom_edge, null: false, resolve_static: true

         def custom_items_with_custom_edge
-          context.schema.custom_connection_class_with_custom_edge.new(get_items)
+          self.class.custom_items_with_custom_edge(context)
+        end
+
+        def self.custom_items_with_custom_edge(context)
+          context.schema.custom_connection_class_with_custom_edge.new(get_items(context))
         end
       end

-      field :limited_items, item.connection_type, null: false, max_page_size: 2
+      field :limited_items, item.connection_type, null: false, max_page_size: 2, resolve_static: true

       def limited_items
-        get_items
+        self.class.limited_items(context)
       end

-      field :preloaded_items, item.connection_type
+      def self.limited_items(context)
+        get_items(context)
+      end
+
+      field :preloaded_items, item.connection_type, resolve_static: true

       def preloaded_items
-        relation = get_items
+        self.class.preloaded_items(context)
+      end
+
+      def self.preloaded_items(context)
+        relation = get_items(context)
         relation.load # force the unbounded relation to load from the database
         relation
       end

-      field :unbounded_items, item.connection_type, max_page_size: nil, default_page_size: nil
+      field :unbounded_items, item.connection_type, max_page_size: nil, default_page_size: nil, resolve_static: true

       def unbounded_items
-        get_items
+        self.class.unbounded_items(context)
+      end
+
+      def self.unbounded_items(context)
+        get_items(context)
       end

-      field :offset_items, item.connection_type
+      field :offset_items, item.connection_type, resolve_static: true

       def offset_items
-        get_items.offset(2)
+        self.class.offset_items(context)
       end

-      private
+      def self.offset_items(context)
+        get_items(context).offset(2)
+      end
+
+      class << self
+        private

-      def get_items
-        context.schema.get_items.call
+        def get_items(context)
+          context.schema.get_items.call
+        end
       end
     end
@@ -177,11 +198,11 @@ def get_items

   def self.included(child_module)
     child_module.class_exec do
-      def exec_query(query_str, variables)
+      def exec_query(query_str, root_value: nil, **variables)
         if TESTING_BATCHING
-          schema.execute_batching(query_str, variables: variables)
+          schema.execute_batching(query_str, root_value: root_value, variables: variables)
         else
-          schema.execute(query_str, variables: variables)
+          schema.execute(query_str, root_value: root_value, variables: variables)
         end
       end
@@ -329,7 +350,7 @@ def assert_names(expected_names, result)
     end

     it "applies default_page_size to first when first and last are unspecified" do
-      res = exec_query(query_str, {})
+      res = exec_query(query_str)
       # Neither first nor last was provided, so default_page_size was applied.
       assert_names(["Avocado", "Beet", "Cucumber", "Dill"], res)
       assert_equal true, get_page_info(res, "hasNextPage")
@@ -359,14 +380,14 @@ def assert_names(expected_names, result)
         }
       GRAPHQL

-      res = exec_query(query_str, {})
+      res = exec_query(query_str)
       assert_equal 10, res["data"]["unboundedItems"]["nodes"].size
     end
   end

   describe "customizing" do
     it "serves custom fields" do
-      res = schema.execute <<-GRAPHQL, root_value: :something
+      res = exec_query <<-GRAPHQL, root_value: :something
        {
          items: customItems(first: 3) {
            nodes {
@@ -397,7 +418,7 @@ def assert_names(expected_names, result)
     it "uses custom ::Edge classes" do
       skip "Not supported" if schema.connection_class.nil?
-      res = schema.execute <<-GRAPHQL, root_value: :something
+      res = exec_query <<-GRAPHQL, root_value: :something
        {
          items: customItemsWithCustomEdge(first: 3) {
            nodes {
@@ -421,7 +442,7 @@ def assert_names(expected_names, result)
     it "applies local max-page-size settings" do
       # Smaller default:
-      res = schema.execute <<-GRAPHQL
+      res = exec_query <<-GRAPHQL
        {
          items(first: 10, maxPageSizeOverride: 3) {
            nodes {
@@ -439,7 +460,7 @@ def assert_names(expected_names, result)
       assert_names(["Avocado", "Beet", "Cucumber"], res)

       # Larger than the default:
-      res = schema.execute <<-GRAPHQL
+      res = exec_query <<-GRAPHQL
        {
          items(first: 10, maxPageSizeOverride: 7) {
            nodes {
@@ -457,7 +478,7 @@ def assert_names(expected_names, result)
       assert_names(["Avocado", "Beet", "Cucumber", "Dill", "Eggplant", "Fennel", "Ginger"], res)

       # Unlimited
-      res = schema.execute <<-GRAPHQL
+      res = exec_query <<-GRAPHQL
        {
          items(first: 100, maxPageSizeOverride: null) {
            nodes {
@@ -475,7 +496,7 @@ def assert_names(expected_names, result)
     end

     it "applies a field-level max-page-size configuration" do
-      res = schema.execute <<-GRAPHQL
+      res = exec_query <<-GRAPHQL
        {
          items: limitedItems(first: 10) {
            nodes {