I would suggest splitting the query sequences into smaller batches. If you output tabular results from BLASTN, those can be concatenated afterwards.
Tools involved:
- Split file to dataset collection
- BLASTN
- Concatenate datasets tail-to-head or Collapse Collection into single dataset in order of the collection
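Outside Galaxy, the same split → run → concatenate idea can be sketched on the command line. This is only an illustration with made-up file names and a hypothetical database (`mydb`); on Galaxy the tools listed above do the equivalent steps:

```shell
# 1. A tiny example query file (3 reads) standing in for the real data.
printf '>read1\nACGT\n>read2\nGGCC\n>read3\nTTAA\n' > queries.fasta

# 2. Split into batches of 2 records per file
#    (queries.part_000.fasta, queries.part_001.fasta, ...).
awk -v size=2 '/^>/ { if (n % size == 0) file = sprintf("queries.part_%03d.fasta", n / size); n++ } { print > file }' queries.fasta

# 3. Each batch would then be one BLASTN job, e.g.:
#    blastn -query queries.part_000.fasta -db mydb -outfmt 6 -out part_000.tsv
#    (-outfmt 6 is tabular, one hit per line, so outputs concatenate cleanly)

# 4. Afterwards, concatenate the tabular outputs:
#    cat part_*.tsv > all_hits.tsv

ls queries.part_*.fasta  # two batch files
```

The batch size (here 2, just for the toy example) is the knob to experiment with so each job fits within the cluster's limits.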
You might need to experiment to see how many collection elements (files) are needed to break the data into jobs that will run on the public clusters.

Also, be careful with the BLASTN parameters – it is very easy to "blow up" the size of the results by setting match criteria that are too permissive. You can always filter your results and run BLASTN again against a smaller set of target sequences if you are interested in sub-hits (and get rid of reads that only produce non-specific hits).
Hope this helps!