Problem Description
Requirement is to get all the matching and non-matching records from two Lists of Maps using streams, with multiple matching criteria. That is, instead of a single filter that compares only "Email", the two lists should be compared with multiple filter predicates that match on both Email and Id.
List 1:
[{"Email": "[email protected]", "Id": "A1"},
{"Email": "[email protected]", "Id": "A2"}]
List 2:
[{"Email": "[email protected]", "Id": "A1"},
{"Email": "[email protected]", "Id": "A2"},
{"Email": "[email protected]", "Id": "B1"}]
Using streams I'm able to find the matching and non-matching records using a single filter predicate on Email.

Matching records:
[{"Email": "[email protected]", "Id": "A1"},
{"Email": "[email protected]", "Id": "A2"}]
Non-matching records:
[{"Email": "[email protected]", "Id": "B1"}]
Is there a way to compare on both Email and Id instead of just Email?
dbRecords.parallelStream()
    .filter(searchData -> inputRecords.parallelStream()
        .anyMatch(inputMap -> searchData.get("Email").equals(inputMap.get("Email"))))
    .collect(Collectors.toList());
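For reference, the literal answer to the question is that the inner predicate can simply require both fields to match with a logical AND (still a quadratic nested scan, which the answer below improves on). This is a minimal runnable sketch; the class name, helper method, and example.com sample emails are illustrative, not from the original post:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class BothFieldsFilter {

    // illustrative helper to build a record map like the ones in the question
    public static Map<String, Object> rec(String email, String id) {
        Map<String, Object> m = new HashMap<>();
        m.put("Email", email);
        m.put("Id", id);
        return m;
    }

    // same shape as the single-filter version, but the inner predicate
    // now requires both Email and Id to be equal
    public static List<Map<String, Object>> matching(List<Map<String, Object>> dbRecords,
                                                     List<Map<String, Object>> inputRecords) {
        return dbRecords.stream()
            .filter(searchData -> inputRecords.stream()
                .anyMatch(inputMap ->
                    searchData.get("Email").equals(inputMap.get("Email"))
                    && searchData.get("Id").equals(inputMap.get("Id"))))
            .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Map<String, Object>> input = Arrays.asList(rec("a1@example.com", "A1"));
        List<Map<String, Object>> db = Arrays.asList(
            rec("a1@example.com", "A1"),
            rec("a1@example.com", "B9")); // same email, different Id -> excluded
        System.out.println(matching(db, input).size()); // prints 1
    }
}
```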
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class ListFiltersToGetMatchingRecords {

    public static void main(String[] args) {
        long startTime = System.currentTimeMillis();
        List<Map<String, Object>> dbRecords = createDbRecords();
        List<Map<String, Object>> inputRecords = createInputRecords();

        List<Map<String, Object>> matchingRecords = dbRecords.parallelStream()
                .filter(searchData -> inputRecords.parallelStream()
                        .anyMatch(inputMap -> searchData.get("Email").equals(inputMap.get("Email"))))
                .collect(Collectors.toList());

        List<Map<String, Object>> notMatchingRecords = inputRecords.parallelStream()
                .filter(searchData -> dbRecords.parallelStream()
                        .noneMatch(inputMap -> searchData.get("Email").equals(inputMap.get("Email"))))
                .collect(Collectors.toList());

        long endTime = System.currentTimeMillis();

        System.out.println("Matching Records: " + matchingRecords.size());
        matchingRecords.forEach(record -> System.out.println(record.get("Email")));

        System.out.println("Non Matching Records: " + notMatchingRecords.size());
        notMatchingRecords.forEach(record -> System.out.println(record.get("Email")));

        System.out.println("Total time taken = " + ((endTime - startTime) / 1000) + " sec");
    }

    private static List<Map<String, Object>> createDbRecords() {
        List<Map<String, Object>> dbRecords = new ArrayList<>();
        for (int i = 0; i < 100; i += 2) {
            Map<String, Object> dbRecord = new HashMap<>();
            dbRecord.put("Email", "naveen" + i + "@gmail.com");
            dbRecord.put("Id", "ID" + i);
            dbRecords.add(dbRecord);
        }
        return dbRecords;
    }

    private static List<Map<String, Object>> createInputRecords() {
        List<Map<String, Object>> inputRecords = new ArrayList<>();
        for (int i = 0; i < 100; i++) {
            Map<String, Object> inputRecord = new HashMap<>();
            inputRecord.put("Email", "naveen" + i + "@gmail.com");
            // key normalized to "Id"; the original used "ID" here, inconsistent with createDbRecords()
            inputRecord.put("Id", "ID" + i);
            inputRecords.add(inputRecord);
        }
        return inputRecords;
    }
}
Recommended Answer
If you care about performance, you should not nest one linear search inside another; the resulting quadratic time complexity can't be fixed with parallel processing once the lists get large.
You should first build a data structure that allows efficient lookups:
Map<List<?>, Map<String, Object>> inputKeys = inputRecords.stream()
    .collect(Collectors.toMap(
        m -> Arrays.asList(m.get("Id"), m.get("Email")),
        m -> m,
        (a, b) -> { throw new IllegalStateException("duplicate " + a + " and " + b); },
        LinkedHashMap::new));
List<Map<String, Object>> matchingRecords = dbRecords.stream()
    .filter(m -> inputKeys.containsKey(Arrays.asList(m.get("Id"), m.get("Email"))))
    .collect(Collectors.toList());
matchingRecords.forEach(m -> inputKeys.remove(Arrays.asList(m.get("Id"), m.get("Email"))));
List<Map<String, Object>> notMatchingRecords = new ArrayList<>(inputKeys.values());
This solution will keep the identity of the Maps.
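As a quick sanity check, the first variant can be packaged as a runnable class and applied to data shaped like the question's two sample lists. The class name, helper method, and example.com emails below are illustrative assumptions, not part of the original answer:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class FirstVariantDemo {

    // illustrative helper to build a record map
    public static Map<String, Object> rec(String email, String id) {
        Map<String, Object> m = new HashMap<>();
        m.put("Email", email);
        m.put("Id", id);
        return m;
    }

    // returns [matching records, non-matching records] using the composite (Id, Email) key
    public static List<List<Map<String, Object>>> partition(
            List<Map<String, Object>> dbRecords, List<Map<String, Object>> inputRecords) {
        // efficient lookup structure: composite key -> original map, in encounter order
        Map<List<?>, Map<String, Object>> inputKeys = inputRecords.stream()
            .collect(Collectors.toMap(
                m -> Arrays.asList(m.get("Id"), m.get("Email")),
                m -> m,
                (a, b) -> { throw new IllegalStateException("duplicate " + a + " and " + b); },
                LinkedHashMap::new));
        List<Map<String, Object>> matching = dbRecords.stream()
            .filter(m -> inputKeys.containsKey(Arrays.asList(m.get("Id"), m.get("Email"))))
            .collect(Collectors.toList());
        // what remains in inputKeys after removing the matches is the non-matching set
        matching.forEach(m -> inputKeys.remove(Arrays.asList(m.get("Id"), m.get("Email"))));
        return Arrays.asList(matching, new ArrayList<>(inputKeys.values()));
    }

    public static void main(String[] args) {
        List<Map<String, Object>> dbRecords = Arrays.asList(
            rec("a1@example.com", "A1"), rec("a2@example.com", "A2"));
        List<Map<String, Object>> inputRecords = Arrays.asList(
            rec("a1@example.com", "A1"), rec("a2@example.com", "A2"), rec("b1@example.com", "B1"));
        List<List<Map<String, Object>>> result = partition(dbRecords, inputRecords);
        System.out.println("Matching Records: " + result.get(0).size());     // 2
        System.out.println("Non Matching Records: " + result.get(1).size()); // 1
    }
}
```

Note that each lookup against the composite `Arrays.asList(id, email)` key is an O(1) hash access, so the whole comparison is linear in the combined list sizes.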
If you are only interested in the values associated with the "Email" key, it would be much simpler:
Map<Object, Object> notMatchingRecords = inputRecords.stream()
    .collect(Collectors.toMap(
        m -> m.get("Id"),
        m -> m.get("Email"),
        (a, b) -> { throw new IllegalStateException("duplicate"); },
        LinkedHashMap::new));
Object notPresent = new Object();
Map<Object, Object> matchingRecords = dbRecords.stream()
    .filter(m -> notMatchingRecords.getOrDefault(m.get("Id"), notPresent)
        .equals(m.get("Email")))
    .collect(Collectors.toMap(
        m -> m.get("Id"),
        m -> m.get("Email"),
        (a, b) -> { throw new IllegalStateException("duplicate"); },
        LinkedHashMap::new));
notMatchingRecords.keySet().removeAll(matchingRecords.keySet());
System.out.println("Matching Records: " + matchingRecords.size());
matchingRecords.forEach((id, email) -> System.out.println(email));
System.out.println("Non Matching Records: " + notMatchingRecords.size());
notMatchingRecords.forEach((id, email) -> System.out.println(email));
The first variant can easily be extended to support more/other map entries:
List<String> keys = Arrays.asList("Id", "Email");
Function<Map<String, Object>, List<?>> getKey =
    m -> keys.stream().map(m::get).collect(Collectors.toList());
Map<List<?>, Map<String, Object>> inputKeys = inputRecords.stream()
    .collect(Collectors.toMap(
        getKey,
        m -> m,
        (a, b) -> { throw new IllegalStateException("duplicate " + a + " and " + b); },
        LinkedHashMap::new));
List<Map<String, Object>> matchingRecords = dbRecords.stream()
    .filter(m -> inputKeys.containsKey(getKey.apply(m)))
    .collect(Collectors.toList());
matchingRecords.forEach(m -> inputKeys.remove(getKey.apply(m)));
List<Map<String, Object>> notMatchingRecords = new ArrayList<>(inputKeys.values());